
    From the invalidity of a General Classification Theory to a new organization of knowledge for the millennium to come

    Proceedings of the 10th Conference of the German Section of the International Society for Knowledge Organization (Internationale Gesellschaft für Wissensorganisation), Vienna, 3-5 July 2006.
    The idea of organizing knowledge and the determinism in classification structures implicitly involve certain limits, which are translated into a General Theory on the Classification of Knowledge, given that classification responds to specific parameters and structures more than to a theoretical concept. The classification of things is a reflection of their classification by man, and this is what determines classification structures. The classification and organization of knowledge are presented to us as an artificial construct, a useful fiction elaborated by man. Positivist knowledge reached its peak in the 20th century, when classifications of science, and the classification systems implemented on their basis, were gestated and consolidated. Pragmatism was to serve as the epistemological and theoretical basis for science and its classification. If the classification of the sciences has given rise to classification systems, the organisation and representation of knowledge must now arise in the context of the globalisation of electronic information, in the hypertextual organisational form of electronic information where, if in information the medium was the message, in organisation the medium is the structure. The virtual reality of electronic information delves even deeper into this; the process is completed as the subject attempts to look for information. This information market needs international standards for documents and data. This body of information organization will be characterized by its dynamic nature: if formal and material structures change our concept of knowledge and the way it is structured, then this organization will undergo dynamic change along with the material and formal structures of the real world. The semantic web is a qualitative leap which can be glimpsed on the new knowledge horizon; the latter would be shaped by the full integration of contents and data, where the language itself would include data and its rules of reasoning, or representation system. The new organisation of knowledge points to a totally new conception; post-modern epistemology has yet to be articulated. In the 21st century, the organization of electronic information presents a novel hypertextual, non-linear architecture that will lead to a new paradigm shift in the organization of knowledge for the millennium to come.
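
    The closing claim, that in the semantic web the language itself would include both data and its rules of reasoning, can be made concrete with a toy example. The following Python sketch (the class names and triples are illustrative, not from the paper) stores facts as subject-predicate-object triples and applies two RDFS-style rules, so that new knowledge is derived from the same representation that holds the data:

    from itertools import product

    # Knowledge as (subject, predicate, object) triples: the data and the
    # vocabulary the rules operate on share one representation.
    triples = {
        ("ex:SemanticWeb", "rdfs:subClassOf", "ex:KnowledgeOrganization"),
        ("ex:KnowledgeOrganization", "rdfs:subClassOf", "ex:InformationScience"),
        ("ex:paper42", "rdf:type", "ex:SemanticWeb"),
    }

    def infer(facts):
        """Apply two RDFS-style rules until a fixed point is reached:
        subClassOf is transitive, and instances inherit superclasses."""
        facts = set(facts)
        while True:
            new = set()
            sub = [(s, o) for s, p, o in facts if p == "rdfs:subClassOf"]
            for (a, b), (c, d) in product(sub, sub):
                if b == c:
                    new.add((a, "rdfs:subClassOf", d))   # transitivity
            for s, p, o in facts:
                if p == "rdf:type":
                    for c, d in sub:
                        if o == c:
                            new.add((s, "rdf:type", d))  # type propagation
            if new <= facts:
                return facts
            facts |= new

    for t in sorted(infer(triples)):
        print(t)
    # ('ex:paper42', 'rdf:type', 'ex:InformationScience') is derived, not stated.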

    PRESY: A Context Based Query Reformulation Tool for Information Retrieval on the Web

    Problem Statement: The huge amount of information on the web, as well as the growth of new inexperienced users, creates new challenges for information retrieval. It has become increasingly difficult for these users to find relevant documents that satisfy their individual needs. Certainly the current search engines (such as Google, Bing and Yahoo) offer an efficient way to browse the web content. However, the result quality depends heavily on the users' queries, which need to be precise to find relevant documents. This task remains complicated for the majority of inexperienced users, who cannot express their needs with significant words in the query. For that reason, we believe that a reformulation of the user's initial query can be a good alternative to improve the selectivity of information. This study proposes a novel approach and presents a prototype system called PRESY (Profile-based REformulation SYstem) for information retrieval on the web. Approach: It uses an incremental approach to categorize users by constructing a contextual base. The latter is composed of two types of context (static and dynamic) obtained from the users' profiles. The proposed architecture was implemented in the .Net environment to perform query reformulation tests. Results: The experiments given at the end of this article show that the precision of the returned content is effectively improved. The tests were performed with the most popular search engines (i.e. Google, Bing and Yahoo), selected in particular for their high selectivity. Among the given results, we found that query reformulation improves the precision of the first three results by 10.7%, and that of the next seven returned elements by 11.7%. Thus the reformulation of users' initial queries improves the relevance of the returned content. Comment: 8 pages
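
    The abstract does not spell out how the contextual base reformulates a query, but the static/dynamic split suggests a simple expansion scheme. The Python sketch below is a speculative illustration, not PRESY's actual algorithm: the profile fields, the term weighting and the expand_query helper are assumptions of mine.

    from collections import Counter

    class UserProfile:
        def __init__(self, interests, recent_queries):
            self.static = set(interests)      # long-lived context: declared interests
            self.dynamic = Counter()          # short-lived context: recent query terms
            for q in recent_queries:
                self.dynamic.update(q.lower().split())

    def expand_query(query, profile, max_terms=2):
        """Append the most relevant context terms to an under-specified query."""
        terms = set(query.lower().split())
        # Prefer dynamic context (recent behaviour), fall back to static interests.
        candidates = [t for t, _ in profile.dynamic.most_common() if t not in terms]
        candidates += [t for t in sorted(profile.static)
                       if t not in terms and t not in candidates]
        return query + " " + " ".join(candidates[:max_terms]) if candidates else query

    profile = UserProfile(
        interests={"python", "retrieval"},
        recent_queries=["ranking evaluation", "search ranking metrics"],
    )
    print(expand_query("precision", profile))  # -> "precision ranking evaluation"

    A real system would build the dynamic context from logged sessions rather than a fixed list, but the shape of the reformulation step would be the same.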

    Towards memory supporting personal information management tools

    In this article we discuss re-retrieving personal information objects and relate the task to recovering from lapses in memory. We propose that, fundamentally, it is lapses in memory that impede users from successfully re-finding the information they need. Our hypothesis is that by learning more about memory lapses in non-computing contexts, and how people cope with and recover from these lapses, we can better inform the design of PIM tools and improve the user's ability to re-access and re-use objects. We describe a diary study that investigates the everyday memory problems of 25 people from a wide range of backgrounds. Based on the findings, we present a series of principles that we hypothesize will improve the design of personal information management tools. This hypothesis is validated by an evaluation of a tool for managing personal photographs, which was designed with respect to our findings. The evaluation suggests that users' performance when re-finding objects can be improved by building personal information management tools that support characteristics of human memory.
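
    As a rough illustration of that design principle, the Python sketch below indexes items by the contextual cues people tend to remember (when, where, who) and ranks candidates by how many remembered fragments they match. The Photo fields and the scoring are illustrative assumptions, not the tool evaluated in the study.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Photo:
        path: str
        taken: date
        place: str = ""
        people: tuple = ()

    def refind(photos, about_when=None, place=None, person=None, tolerance_days=60):
        """Rank photos by how many remembered cues they match, so partial,
        fuzzy recollections (e.g. an approximate date) still narrow the search."""
        def score(p):
            s = 0
            if about_when and abs((p.taken - about_when).days) <= tolerance_days:
                s += 1
            if place and place.lower() in p.place.lower():
                s += 1
            if person and person.lower() in (n.lower() for n in p.people):
                s += 1
            return s
        return [p for p in sorted(photos, key=score, reverse=True) if score(p) > 0]

    photos = [
        Photo("img_001.jpg", date(2006, 7, 4), "Vienna", ("Anna",)),
        Photo("img_002.jpg", date(2006, 12, 24), "Glasgow", ("Tom",)),
    ]
    # "Sometime around July 2006, with Anna" is enough to re-find the photo.
    print([p.path for p in refind(photos, about_when=date(2006, 7, 1), person="Anna")])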

    Software that Learns from its Own Failures

    All non-trivial software systems suffer from unanticipated production failures. However, those systems are passive with respect to failures and do not take advantage of them in order to improve their future behavior: they simply wait for failures to happen and trigger hard-coded recovery strategies. Instead, I propose a new paradigm in which software systems learn from their own failures. By using an advanced monitoring system, they maintain a constant awareness of their own state and health. They are designed to automatically explore alternative recovery strategies inferred from past successful and failed executions. Their recovery capabilities are assessed by self-injection of controlled failures; this process produces knowledge in anticipation of future unanticipated failures.
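
    A minimal Python sketch of the proposed paradigm follows; the strategy names, the success-rate bookkeeping and the self_inject rehearsal loop are assumptions of mine, meant only to show how past successful and failed recoveries could steer future ones.

    import random
    from collections import defaultdict

    class SelfHealingRunner:
        def __init__(self, strategies):
            self.strategies = strategies  # name -> recovery callable, True on success
            # failure kind -> strategy name -> [successes, attempts]
            self.record = defaultdict(lambda: defaultdict(lambda: [0, 0]))

        def recover(self, failure_kind):
            """Try strategies ordered by their past success rate for this kind."""
            stats = self.record[failure_kind]
            def success_rate(name):
                ok, tries = stats[name]
                return ok / tries if tries else 0.5  # optimistic prior when untried
            for name in sorted(self.strategies, key=success_rate, reverse=True):
                stats[name][1] += 1
                if self.strategies[name](failure_kind):
                    stats[name][0] += 1
                    return name
            return None

        def self_inject(self, kinds, rounds=100):
            """Rehearse recovery on controlled, self-injected failures."""
            for _ in range(rounds):
                self.recover(random.choice(kinds))

    # Illustrative strategies: each one only fixes certain failure kinds.
    runner = SelfHealingRunner({
        "retry": lambda kind: kind == "timeout",
        "restart_worker": lambda kind: kind in ("timeout", "oom"),
    })
    runner.self_inject(["timeout", "oom"])
    print(runner.recover("oom"))  # -> "restart_worker", learned during rehearsal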