88 research outputs found

    Educating cultural heritage information professionals for Australia's galleries, libraries, archives and museums

    This research explored the skills, knowledge and qualities, and the professional education needs, of information professionals in galleries, libraries, archives and museums (GLAM) in Australia. These cultural heritage institutions have always had a role in allowing us to experience, explore and interpret our world by enabling people to engage with information in multiple forms through their shared core functions of acquiring, organising, storing, providing access to and preserving information. With the advent of the digital environment, the role of the information professional has grown, but so too have the opportunities for making the collections of Australia's cultural heritage institutions available, including the increased ability for collaboration and convergence between institutions. The need to educate information professionals who can operate across these blurred cultural heritage boundaries is becoming paramount if we are to maximise the use of our rich collections of cultural heritage information.
    This research identified similarities in skills, knowledge and qualities using the Grounded Delphi method, a relatively new methodological extension of the Delphi method that integrates aspects of Grounded Theory, particularly in the data analysis, with the Delphi method, a group communication tool and a means of achieving consensus. The process consisted of three rounds of data collection: the first was exploratory focus groups, followed by two rounds of online questionnaires. In keeping with Delphi procedures, an 'a priori' consensus level was set at 75%. Of the 74 questions that participants had to answer, 57 reached consensus.
    The findings revealed that although full convergence of galleries, libraries, archives and museums is unlikely, many of the skills, knowledge and qualities would be required across all four GLAM sectors. However, some skills may require a 'change of focus' in the digital environment. Key findings included the need to 'understand why we do what we do'; to 'understand the broad purpose of our role'; 'the need to better articulate the profession's existence and its role in social capacity building'; and the need for broader, more generalist skills, but without losing any specialist capacity. The findings provide the first empirically based guidelines on what needs to be included in an educational framework for information professionals who will work in the emerging GLAM environment. A further recommendation is to consider establishing an undergraduate degree in which the broader, cross-disciplinary skills and knowledge are taught in an Information Management/Informatics focussed program.
    As the first study of GLAM education requirements in Australia and the wider Asia-Pacific region to take a holistic approach by engaging information professionals across all four types of cultural heritage institutions, this thesis makes a significant contribution to the GLAM research field and to information education generally.
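    The 75% consensus rule is concrete enough to illustrate. Below is a minimal sketch, assuming consensus on a questionnaire item means that at least 75% of the panel chose the same response category; the response labels and the function name are illustrative, not taken from the thesis.

```python
from collections import Counter

CONSENSUS_LEVEL = 0.75  # the 'a priori' threshold reported in the study

def reaches_consensus(responses, threshold=CONSENSUS_LEVEL):
    """Return True if any single response category attracts at least
    `threshold` of the panel -- one common way to operationalise Delphi
    consensus (the thesis may define it differently)."""
    if not responses:
        return False
    counts = Counter(responses)
    top_count = counts.most_common(1)[0][1]
    return top_count / len(responses) >= threshold

# Hypothetical round-2 answers for one questionnaire item
panel = ["agree", "agree", "agree", "agree", "agree", "agree",
         "neutral", "disagree"]
print(reaches_consensus(panel))  # True: 6/8 = 0.75 meets the threshold
```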

    European Curriculum Reflections on Library and Information Science Education

    The project behind this book was carried out with the support of the European Community in the framework of the Socrates programme. The European Curriculum Reflections on Library and Information Science Education project was inspired by the curriculum discussions on the Bologna Declaration initiated at a EUCLID conference in Thessaloniki in 2002. EUCLID (European Association for Library & Information Education and Research) is an independent European non-governmental, non-profit organisation that exists to promote European co-operation in library and information education and research.

    Collective Privacy Recovery: Data-sharing Coordination via Decentralized Artificial Intelligence

    Collective privacy loss is becoming a colossal problem and an emergency for personal freedoms and democracy. But are we prepared to handle personal data as a scarce resource and to share data collectively under the doctrine: as little as possible, as much as necessary? We hypothesize a significant privacy recovery if a population of individuals, the data collective, coordinates to share the minimum data needed to run online services at the required quality. Here we show how to automate and scale up complex collective arrangements for privacy recovery using decentralized artificial intelligence. For this, we compare for the first time attitudinal, intrinsic, rewarded and coordinated data sharing in a rigorous living-lab experiment of high realism involving more than 27,000 real data disclosures. Using causal inference and cluster analysis, we differentiate the criteria predicting privacy and five key data-sharing behaviors. Strikingly, data-sharing coordination proves to be a win-win for all: remarkable privacy recovery for people with evident cost reductions for service providers.
    Comment: Contains Supplementary Information
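    The coordination idea lends itself to a toy illustration. The sketch below is not the authors' decentralized algorithm; it is a minimal, centralized greedy stand-in, assuming each individual offers candidate disclosure "plans" with a privacy cost and a quality contribution, and a coordinator accepts plans only until the service's quality target is met ("as little as possible, as much as necessary"). All names and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    user: str
    privacy_cost: float  # how much personal data the plan discloses
    quality: float       # contribution to the service's required quality

def coordinate(plans_per_user, quality_target):
    """Greedy stand-in for data-sharing coordination: keep each user's
    plan with the best quality-per-privacy ratio, then accept plans in
    ratio order and stop as soon as the quality target is reached, so
    no more data is disclosed than the service needs."""
    best = [max(plans, key=lambda p: p.quality / p.privacy_cost)
            for plans in plans_per_user]
    best.sort(key=lambda p: p.quality / p.privacy_cost, reverse=True)
    chosen, total_quality = [], 0.0
    for plan in best:
        if total_quality >= quality_target:
            break  # target met: every further disclosure is unnecessary
        chosen.append(plan)
        total_quality += plan.quality
    return chosen, total_quality

plans = [
    [Plan("alice", 1.0, 3.0), Plan("alice", 2.0, 4.0)],
    [Plan("bob",   1.0, 1.0), Plan("bob",   3.0, 6.0)],
    [Plan("carol", 2.0, 2.0)],
]
chosen, q = coordinate(plans, quality_target=4.0)
print([p.user for p in chosen], q)  # ['alice', 'bob'] 9.0; carol discloses nothing
```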

    Organizing scientific data sets: studying similarities and differences in metadata and subject term creation

    According to Salo, the metadata entered into repositories are disorganized and the metadata schemes underlying repositories are arcane. This creates a challenging repository environment with regard to personal information management (PIM) and knowledge organization systems (KOSs). This dissertation research is a step towards addressing the need to study the information organization of scientific data in more detail.
    METHODS: A concurrent triangulation mixed-methods approach was used to study the descriptive metadata and subject term application of information professionals and scientists working with two data sets (the bird data set and the hunting data set). Quantitative and qualitative methods were used in combination during study design, data collection, and analysis.
    RESULTS: A total of 27 participants, 11 information professionals and 16 scientists, took part in this study. Descriptive metadata results indicate that information professionals were more likely to use standardized metadata schemes. Scientists did not use library-based standards to organize data in their own collections, and nearly all scientists mentioned how central software was to their overall data organization processes. Subject term application results suggest that the Integrated Taxonomic Information System (ITIS) was the best vocabulary for describing scientific names, while the Library of Congress Subject Headings (LCSH) were best for describing topical terms. The two groups applied 45 topical terms to the bird data set and 49 topical terms to the hunting data set. Term overlap, meaning the same terms were applied by both groups, was close to 25% for each data set (27% for the bird data set and 24% for the hunting data set); unique terms, those applied by only one group, were more widely dispersed.
    CONCLUSIONS: While there were similarities between the two groups, the differences were the most apparent. Based on this research, it is recommended that general repositories use metadata created by information professionals, while domain-specific repositories use metadata created by scientists.
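    The reported overlap figures suggest a simple set-based measure. The sketch below computes overlap as the share of distinct terms applied by both groups out of all distinct terms applied by either group (a Jaccard-style ratio); whether the dissertation used exactly this formula is an assumption, and the term lists are invented for illustration.

```python
def term_overlap(group_a, group_b):
    """Share of terms applied by both groups, out of all distinct terms
    applied by either group (Jaccard index). That this matches the
    dissertation's exact formula is an assumption."""
    a, b = set(group_a), set(group_b)
    return len(a & b) / len(a | b)

# Hypothetical topical terms applied to the bird data set
professionals = {"birds", "migration", "ornithology", "wetlands"}
scientists    = {"birds", "migration", "banding", "population survey", "wetlands"}
print(f"{term_overlap(professionals, scientists):.0%}")  # 50% on this toy data
```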

    Das Digitale "GedĂ€chtnis der Menschheit": eine Untersuchung dokumentarischer Praktiken im Zeitalter der digitalen Technologie [The digital "Memory of Mankind": an investigation of documentary practices in the age of digital technology]

    This research is a study of the UNESCO "Memory of the World" Programme, established to increase awareness of the existence and relevance of documentary heritage and to achieve its universal and permanent accessibility. In this context, digital technology is increasingly used to provide access to documentary heritage, but this activity also leads to a series of changes in how documents are understood and handled. Starting from the observation that the conceptual and practical changes triggered by digital technology in the "Memory of the World" do not seem to accurately reflect its stated philosophy, this research critically analyzes the possibilities and limits that the technology offers. The analysis is supported by a conceptual framework anchored in the medium theory of Harold Innis and his concepts of medium, bias, space and time, and balance, which serve as analytical lenses for closely studying selected aspects of digital technology and their influence.
    Despite popular beliefs that digital technology is most suitable for universal access, the findings of this research suggest that this cannot really be the case: an over-emphasis on the technical possibilities of digital access does not support the overall purpose of the "Memory of the World" and narrows its potential relevance. At first glance, this may suggest not recommending the use of digital technology at all. However, acknowledging that every medium has both limits and possibilities, the study does not reject digital technology but instead searches for solutions that may assist with integrating it into the "Memory of the World" in accordance with the Programme's overall purpose and philosophy. To this end, three recommendations are elaborated, applying the same conceptual framework that revealed the limits of digital technology to build on its possibilities. To motivate why following these recommendations is necessary, the study concludes by shifting attention from the relevance of digital technology in the "Memory of the World" Programme to the relevance of the Programme in a world changed by digital technology.

    The New Hampshire, Vol. 46, No. 21 (Nov. 1, 1956)

    An independent, student-produced newspaper from the University of New Hampshire.

    On a notion of abduction and relevance for first-order logic clause sets

    I propose techniques to help explain entailment and non-entailment in first-order logic, relying on deductive and abductive reasoning respectively.
    First, given an unsatisfiable clause set, one could ask which clauses are necessary for any possible deduction (syntactically relevant), usable for some deduction (syntactically semi-relevant), or unusable (syntactically irrelevant). I propose a first-order formalization of this notion and demonstrate a lifting of it to the explanation of an entailment with respect to an axiom set defined in certain description logic fragments. The formalization is accompanied by a semantic characterization via conflict literals (contradictory simple facts): from an unsatisfiable clause set, a pair of conflict literals is always deducible. A relevant clause is necessary to derive any conflict literal, a semi-relevant clause is necessary to derive some conflict literal, and an irrelevant clause is not useful in deriving any conflict literal. This provides a picture of why an explanation holds beyond what one can get from the predominant notion of a minimal unsatisfiable set. The need to test whether a clause is (syntactically) semi-relevant leads to a generalization of a well-known resolution strategy: resolution equipped with the set-of-support (SOS) strategy is refutationally complete on a clause set N and SOS M if and only if there is a resolution refutation from N âˆȘ M using a clause in M. This result non-trivially improves the original formulation.
    Second, abductive reasoning helps find extensions of a knowledge base that entail some missing consequence (called the observation). This is useful not only for repairing incomplete knowledge bases but also for explaining a possibly unexpected observation. I focus in particular on TBox abduction in the EL description logic (still a first-order logic fragment via a model-preserving translation scheme), which is lightweight but prevalent in practice. The solution space can be huge or even infinite, so different kinds of minimality notions can help separate the wheat from the chaff. I argue that the existing notions are insufficient and introduce connection minimality. This criterion offers an interpretation of Occam's razor in which hypotheses are accepted only when they help acquire the entailment without arbitrarily using axioms unrelated to the problem at hand. In addition, I provide a first-order technique to compute the connection-minimal hypotheses in a sound and complete way. The key technique relies on prime implicates. While the negation of a single prime implicate can already serve as a first-order hypothesis, a connection-minimal hypothesis that follows the EL syntactic restrictions (a set of simple concept inclusions) requires a combination of them. Termination is provable by bounding the term depth in the prime implicates and considering only those that are also subset-minimal. I also present an evaluation on ontologies from the medical domain, implementing a prototype with SPASS as the prime implicate generation engine.
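    The quoted set-of-support result yields a direct procedure for the semi-relevance test, at least in the propositional case: to check whether a clause C is syntactically semi-relevant in an unsatisfiable set, run resolution with SOS {C}; the empty clause is then derivable exactly when some refutation uses C. The sketch below is a propositional miniature (the thesis works in first-order logic, where lifting this requires unification); the clause representation and helper names are my own.

```python
from itertools import product

def resolve(c1, c2):
    """All propositional resolvents of two clauses. Clauses are frozensets
    of literals; a literal is a string, negation marked by a leading '~'."""
    resolvents = []
    for lit in c1:
        neg = lit[1:] if lit.startswith("~") else "~" + lit
        if neg in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {neg})))
    return resolvents

def semi_relevant(clauses, candidate, max_rounds=10_000):
    """Test syntactic semi-relevance of `candidate` in the unsatisfiable
    set `clauses`: by the SOS completeness result above, resolution with
    set-of-support {candidate} derives the empty clause iff some
    refutation actually uses `candidate`."""
    side = {frozenset(c) for c in clauses} - {frozenset(candidate)}
    support = {frozenset(candidate)}
    for _ in range(max_rounds):
        new = set()
        # SOS restriction: at least one parent must come from the support
        for c1, c2 in product(support, side | support):
            for r in resolve(c1, c2):
                if not r:
                    return True  # empty clause found
                if r not in side and r not in support:
                    new.add(r)
        if not new:
            return False  # saturated without the empty clause
        support |= new
    return False

# {p}, {~p, q}, {~q}, {r} is unsatisfiable; r plays no part in that.
S = [{"p"}, {"~p", "q"}, {"~q"}, {"r"}]
print(semi_relevant(S, {"p"}))  # True: p is usable in a refutation
print(semi_relevant(S, {"r"}))  # False: no refutation can use r
```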
    • 

    corecore