Crowd-Sourcing Fuzzy and Faceted Classification for Concept Search
Searching for concepts in science and technology is often a difficult task.
To facilitate concept search, different types of human-generated metadata have
been created to define the content of scientific and technical disclosures.
Classification schemes such as the International Patent Classification (IPC)
and MEDLINE's MeSH are structured and controlled, but require trained experts
and central management to restrict ambiguity (Mork, 2013). While unstructured
tags of folksonomies can be processed to produce a degree of structure
(Kalendar, 2010; Karampinas, 2012; Sarasua, 2012; Bragg, 2013), the freedom
enjoyed by the crowd typically results in less precision (Stock, 2007).
Existing classification schemes suffer from inflexibility and ambiguity.
Since humans understand language, inference, implication, abstraction and hence
concepts better than computers, we propose to harness the collective wisdom of
the crowd. To do so, we propose a novel classification scheme that is
sufficiently intuitive for the crowd to use, yet powerful enough to facilitate
search by analogy, and flexible enough to deal with ambiguity. The system will
enhance existing classification information. Linking up with the semantic web
and computer intelligence, a Citizen Science effort (Good, 2013) would support
innovation by improving the quality of granted patents, reducing duplicative
research, and stimulating problem-oriented solution design.
A prototype of our design is in preparation. A crowd-sourced fuzzy and
faceted classification scheme will allow for better concept search and improved
access to prior art in science and technology.
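The core idea of the proposed scheme, documents tagged with (facet, value) pairs that carry crowd-assigned fuzzy membership weights, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; all facet names, document identifiers and weights are invented for the example.

```python
from collections import defaultdict

class FuzzyFacetedIndex:
    """Minimal sketch: documents are tagged with (facet, value) pairs,
    each carrying a crowd-assigned membership weight in [0, 1]."""

    def __init__(self):
        # (facet, value) -> {doc_id: weight}
        self._index = defaultdict(dict)

    def tag(self, doc_id, facet, value, weight):
        # Keep the strongest weight if the crowd tags the same pair twice.
        current = self._index[(facet, value)].get(doc_id, 0.0)
        self._index[(facet, value)][doc_id] = max(current, weight)

    def search(self, *criteria):
        """Rank documents by the minimum membership across all requested
        (facet, value) criteria -- a standard fuzzy-AND."""
        scores = None
        for facet, value in criteria:
            postings = self._index.get((facet, value), {})
            if scores is None:
                scores = dict(postings)
            else:
                scores = {d: min(w, postings.get(d, 0.0))
                          for d, w in scores.items()}
        return sorted(((d, w) for d, w in (scores or {}).items() if w > 0),
                      key=lambda item: -item[1])

idx = FuzzyFacetedIndex()
idx.tag("US123", facet="function", value="fastening", weight=0.9)
idx.tag("US123", facet="material", value="polymer", weight=0.6)
idx.tag("US456", facet="function", value="fastening", weight=0.4)
print(idx.search(("function", "fastening"), ("material", "polymer")))
# -> [('US123', 0.6)]
```

Ranking by the minimum weight across facets means a document must satisfy every requested concept to some degree, which tolerates the ambiguity of crowd tags while still supporting faceted narrowing.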
Electronic blending in virtual microscopy
Virtual microscopy (VM) is a relatively new technology that transforms the computer into a microscope. In essence, VM allows glass slides to be scanned and transferred from light microscopy to the digital environment of the computer. This transition is also a function of the change from print knowledge to electronic knowledge, or, as Gregory Ulmer puts it, a shift ‘from literacy to electracy’. Blended learning, of course, is capable of including a wide variety of educational protocols in its definition; it is also at the heart of electronically mediated forms of education. Since 2004, VM has been introduced into Dentistry, Medicine, Biomedical Science and Veterinary Science courses at the University of Queensland, as part of a project aimed at consolidating VM techniques and technologies into their curricula. This paper uses some of the evaluative survey data collected from this embedding process to discuss the role blended learning plays in electronic styles of learning, or ‘electracy’, before finally reflecting on the quantum world represented in VM imagery.
TaxoFolk: a hybrid taxonomy–folksonomy classification for enhanced knowledge navigation
Changing anatomies of Information Literacy at the postgraduate level: refinements of models and shifts in assessment
In this paper, fundamental principles that might inform an approach to Information Literacy (IL) at the postgraduate level are identified. These are based on the following premises:
- the aims of postgraduate/doctoral studies differ from those of earlier educational levels and face specific challenges due to the heterogeneity of student populations
- IL frameworks have to acknowledge and address this challenge by adjusting to the specific needs of postgraduate students who operate in new information realms
- new modes of assessment are needed as a result of revolutionary changes in information landscapes and in the patterns of generation and use of scientific information
Teaching students the scientific method and culture has long been recognized as the major focus of postgraduate education; an important precondition for research practice is adequate performance in the realm of information handling and information management, i.e., information literacy. IL at the postgraduate level has a strong focus on the universe of scientific information, which itself went through tremendous changes in the last decade, particularly as a result of the appearance of Web 2.0 (e.g. Science 2.0, Research 2.0). Such profound changes suggest renewed conceptions and focal points of IL at the postgraduate level that take into account the fluid nature of current information environments. After discussing the changes in information landscapes brought about by Web 2.0 and examining the transformed premises of scientific work within such environments, the authors plead for a re-conceptualization of IL at the postgraduate level and propose new principles for IL frameworks and modes of assessment that recognize this transformation.
Social Software für das Wissensmanagement im Unternehmen
The use of software is one way to support the systematic management of knowledge, an elementary resource for businesses. New technologies enable applications for changed forms of global collaboration; these include wikis, weblogs and social tagging. A comprehensive comparison of their properties with the demands that knowledge management places on supporting applications makes it possible to assess to what extent social software is suitable for corporate knowledge management.
O desafio da homogeneização normativa em instituições de memória: proposta de um modelo uniformizador e colaborativo
Doctoral programme in Information and Communication in Digital Platforms.
The growth of digital information (born digital and digitized), as a result of the
technological advances of ICT (Information and Communication
Technologies), raised the need for a reflection on the information models
adopted by memory institutions such as Libraries, Archives and Museums
(LAM), and on their ability to meet the information needs of their users.
This research work aims at designing and evaluating a generic model for the
organization and representation of electronic information in an information
system. The model is intended for use by both end users and information
professionals, taking advantage of the current collaborative and participatory
environment.
The conceptualization of the model was based on a qualitative analysis of the
standards for authority records, bibliographic records and representation
formats adopted by memory institutions.
After the design harmonization, a prototype was developed to test the ideas
and concepts underlying the model. Data were collected through information
retrieval tests performed on the prototype by users and information
professionals (thirty participants in total).
The experiment took place in a laboratory setting. Data collection was carried
out using different techniques, such as tests, interviews and questionnaire
surveys.
The triangulation of the cross-analysis results obtained from the various
sources showed that both users and information professionals found the
integration of standards harmonization, reflected in the various modules, very
interesting, as well as the integration of communication services and tools and
the use of the platform's participatory/collaborative component, favouring the
Wiki, followed by Comments, Tags, Discussion forums and E-mail.
Beschreibung, Verwaltung und Ausführung von Arbeitsabläufen im autonomen Datenbank-Tuning
In recent decades, the administration of IT systems has become increasingly complex and costly. To guarantee high availability and performance of these systems, continuous manual administration and optimization during live operation is hardly sufficient any more. Initiatives such as Autonomic Computing therefore attempt to reduce the administrative complexity of new systems by automating complex system management and configuration tasks and subsequently delegating them to the systems themselves. This thesis investigates the transferability of Autonomic Computing concepts to database tuning and designs an infrastructure for automating typical database tuning tasks while reducing human interaction. The description and modelling of tuning knowledge was identified as one of the basic prerequisites for automating database tuning tasks. The concepts presented therefore enable administrators to capture both problem situations and the corresponding proven tuning workflows, and to store them in the system. An architecture built on these concepts allows IT systems to be monitored continuously and, when problematic behaviour is detected, to initiate the corresponding tuning workflows stored in the system beforehand. Both the monitoring process and the tuning process can be influenced by the current workload and by incorporating metadata and user-defined tuning goals. To support the collaborative development and exchange of tuning practices, this thesis further designs a community platform, in which concepts for the efficient, fine-grained, semantically rich storage, versioning and evolution of tuning practices play an important role.
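The monitoring-and-reaction cycle described in the abstract, continuous observation that triggers pre-registered tuning workflows when a problem situation is detected, can be sketched roughly as follows. All metric names, thresholds and workflow steps here are invented for illustration and are not taken from the thesis.

```python
# Hypothetical registry mapping problem situations to stored workflows.
TUNING_WORKFLOWS = {
    "buffer_hit_ratio_low": ["increase_buffer_pool", "recheck_hit_ratio"],
    "lock_waits_high": ["analyze_lock_contention", "adjust_isolation_level"],
}

def detect_problems(metrics, goals):
    """Compare observed metrics against user-defined tuning goals and
    return the names of any matching problem situations."""
    problems = []
    if metrics["buffer_hit_ratio"] < goals["min_buffer_hit_ratio"]:
        problems.append("buffer_hit_ratio_low")
    if metrics["lock_waits_per_min"] > goals["max_lock_waits_per_min"]:
        problems.append("lock_waits_high")
    return problems

def monitor_once(collect_metrics, goals, execute_step):
    """One monitoring cycle: collect metrics, detect problem situations
    and run the pre-registered tuning workflow for each one found."""
    executed = []
    for problem in detect_problems(collect_metrics(), goals):
        for step in TUNING_WORKFLOWS[problem]:
            execute_step(step)
            executed.append(step)
    return executed

# Example cycle: a low buffer hit ratio triggers its stored workflow.
goals = {"min_buffer_hit_ratio": 0.9, "max_lock_waits_per_min": 100}
log = []
steps = monitor_once(
    lambda: {"buffer_hit_ratio": 0.75, "lock_waits_per_min": 12},
    goals,
    log.append,
)
```

Separating problem detection from workflow execution mirrors the thesis's premise that tuning knowledge (problem situations plus proven responses) is captured declaratively by administrators and only then handed to the system to execute autonomously.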