A bioinformatics knowledge discovery in text application for grid computing
Background: A fundamental activity in biomedical research is knowledge discovery: searching through large amounts of biomedical information such as documents and data. High-performance computational infrastructures, such as Grid technologies, are emerging as a possible way to handle the intensive use of information and communication technology (ICT) resources in the life sciences. The goal of this work was to develop a middleware solution that lets knowledge discovery applications exploit scalable, distributed computing systems.
Methods: We present the development of a grid application for Knowledge Discovery in Text (KDT) based on a middleware methodology. The system must model the user application, split the work into many parallel jobs, and distribute them across the computational nodes. It must also be aware of the available computational resources and their status, and monitor the execution of the parallel jobs. These operational requirements led to the design of a middleware that is specialised through user application modules. It includes a graphical user interface giving access to a node search system, a load-balancing system, and a transfer optimiser that reduces communication costs.
Results: A prototype of the middleware solution and its performance evaluation in terms of the speed-up factor are presented. The prototype was written in Java on Globus Toolkit 4, with the grid infrastructure built on GNU/Linux nodes. A test of named entity recognition for symptoms and pathologies was carried out on a collection of 5,000 scientific documents taken from PubMed, and the results are reported.
Conclusion: In this paper we discuss the development of a grid application based on a middleware solution. It has been tested on a knowledge discovery in text process that extracts new and useful information about symptoms and pathologies from a large collection of unstructured scientific documents. As an example, a Knowledge Discovery in Databases (KDD) computation was applied to the output of the KDT user module to extract new knowledge about symptom and pathology bio-entities.
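The core mechanism here, splitting a text-mining task into parallel jobs and measuring the speed-up factor S(n) = T(1)/T(n), can be illustrated independently of the grid stack. Below is a minimal Python sketch under that reading: the gazetteers, function names, and corpus are invented for the example, and a local process pool stands in for the grid nodes.

```python
# A minimal sketch of the job-splitting idea, NOT the authors' Java/Globus
# middleware: the document collection is partitioned into independent chunks,
# each chunk is processed by an NER worker, and the speed-up factor
# S(n) = T(1) / T(n) is reported.
import time
from multiprocessing import Pool

SYMPTOM_TERMS = {"fever", "cough", "fatigue"}    # toy gazetteers, not the
PATHOLOGY_TERMS = {"influenza", "pneumonia"}     # project's real resources

def ner_job(docs):
    """Tag symptom/pathology mentions in one chunk of (id, text) documents."""
    hits = []
    for doc_id, text in docs:
        for token in text.lower().split():
            if token in SYMPTOM_TERMS:
                hits.append((doc_id, token, "symptom"))
            elif token in PATHOLOGY_TERMS:
                hits.append((doc_id, token, "pathology"))
    return hits

def run(docs, n_workers):
    chunks = [docs[i::n_workers] for i in range(n_workers)]
    start = time.perf_counter()
    with Pool(n_workers) as pool:
        parts = pool.map(ner_job, chunks)        # one parallel job per chunk
    elapsed = time.perf_counter() - start
    return [hit for part in parts for hit in part], elapsed

if __name__ == "__main__":
    corpus = [(i, "patient with fever and cough suspected influenza")
              for i in range(5000)]
    _, t1 = run(corpus, 1)
    _, t4 = run(corpus, 4)
    print(f"speed-up S(4) = {t1 / t4:.2f}")
```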
Conceptual roles of data in program: analyses and applications
Program comprehension is a prerequisite for many software evolution and maintenance tasks. Current research falls short in addressing how to build tools that use domain-specific knowledge to extract the information needed for program comprehension. Such capabilities are critical for large and complex programs, where comprehension is often impossible without the help of domain-specific knowledge.
Our research advances the state of the art in program analysis techniques based on domain-specific knowledge. Program artifacts, including variables and methods, are carriers of domain concepts that provide the key to understanding programs. Our program analysis is directed by domain knowledge stored as domain-specific rules; it is iterative and interactive, built on flexible inference rules and interchangeable, extensible information storage. We designed and developed a comprehensive software environment, SeeCORE, based on our knowledge-centric analysis methodology. The SeeCORE tool provides multiple views and abstractions to assist in understanding complex programs. Case studies demonstrate the effectiveness of our method, and we demonstrate the flexibility of our approach by analyzing two legacy programs in distinct domains.
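The idea of rule-directed concept assignment can be shown in miniature. The following sketch is not the SeeCORE tool itself; the rule syntax, domain concepts, and identifiers are invented to illustrate how domain-specific rules might annotate program artifacts such as variable and method names.

```python
# Illustrative sketch of knowledge-centric analysis: domain-specific rules
# map identifier naming patterns to domain concepts, and a single pass
# annotates the program's artifacts. All rules and names are hypothetical.
import re

DOMAIN_RULES = [
    (re.compile(r"(acct|account)", re.I), "BankAccount"),
    (re.compile(r"(bal|balance)", re.I), "Balance"),
    (re.compile(r"(txn|transaction)", re.I), "Transaction"),
]

def annotate(identifiers):
    """Return {identifier: [matched domain concepts]}."""
    annotations = {}
    for name in identifiers:
        concepts = [c for pat, c in DOMAIN_RULES if pat.search(name)]
        if concepts:
            annotations[name] = concepts
    return annotations

print(annotate(["acctBalance", "postTxn", "tmpIdx"]))
# {'acctBalance': ['BankAccount', 'Balance'], 'postTxn': ['Transaction']}
```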
Sequential Recommendation with Link Prediction on Graphs via Meta-Learning
In this paper, we propose a novel framework called Sequential Graph Meta-learning (SGM) to address the problem of sequential recommendation: predicting the next item from a user's historical behavior. SGM introduces a graph-based representation that captures the relationships between users and items, treating them as nodes and their interactions as edges. By extracting meaningful node embeddings, our model effectively encodes the complex relationships within the graph. Furthermore, we apply meta-learning to subgraphs that represent user-item interactions, enabling the model to adapt as time changes. Specifically, our approach focuses on link prediction: predicting whether a user will interact with a specific item in the future. Through extensive experiments, we demonstrate that our SGM framework outperforms previous models in most scenarios by significant margins, highlighting its effectiveness in addressing the challenges of sequential recommendation and improving recommendation accuracy.
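As a rough illustration of the link-prediction step (not the SGM model itself), a candidate user-item edge can be scored from node embeddings. In the sketch below, random vectors stand in for learned embeddings, and a sigmoid of the dot product gives an interaction probability; the meta-learning over time-sliced subgraphs is omitted.

```python
# Toy link-prediction scoring over a user-item graph. The embeddings here
# are random placeholders for vectors that SGM would learn.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 100, 50, 16
user_emb = rng.normal(size=(n_users, dim))   # stand-ins for learned embeddings
item_emb = rng.normal(size=(n_items, dim))

def link_score(u, i):
    """Predicted probability that user u will interact with item i."""
    logit = user_emb[u] @ item_emb[i]
    return 1.0 / (1.0 + np.exp(-logit))

# Rank items for user 0 by predicted interaction probability.
scores = [(i, link_score(0, i)) for i in range(n_items)]
top5 = sorted(scores, key=lambda s: s[1], reverse=True)[:5]
print(top5)
```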
Ontology-based knowledge management for technology intensive industries
An Intelligent Online Shopping Guide Based On Product Review Mining
This position paper describes ongoing work on a novel recommendation framework for helping online shoppers choose the products they most desire, based on requirements expressed in natural language. Existing feature-based shopping guidance systems fail when the customer lacks domain expertise. This framework enables the customer to use natural language in the query text to retrieve preferred products interactively. In addition, it is intelligent enough to let a customer use objective and subjective terms, or even the purpose of the purchase, to screen for the expected products.
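One plausible reading of the matching step, sketched below with invented data: terms extracted from the natural-language query are compared against opinion phrases mined from product reviews, and products are ranked by overlap. The real framework's extraction and ranking methods are not specified in the abstract.

```python
# Hypothetical sketch of query-to-review matching; all term lists and
# product data are placeholders, not mined from real reviews.
QUERY_TERMS = {"lightweight", "long battery life", "for travel"}

MINED_OPINIONS = {                    # product -> opinions mined from reviews
    "laptop-A": {"lightweight", "long battery life", "dim screen"},
    "laptop-B": {"heavy", "long battery life", "bright screen"},
}

def rank(query_terms, mined):
    """Score products by overlap between query terms and mined opinions."""
    scored = [(p, len(query_terms & ops)) for p, ops in mined.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)

print(rank(QUERY_TERMS, MINED_OPINIONS))
# [('laptop-A', 2), ('laptop-B', 1)]
```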
The power of quiet: Re-making affective amateur and professional textiles agencies
This article is part of a special issue on textiles and intersecting identities. It was developed from a paper given at the Association of Fashion & Textiles Courses (FTC) Conference, Futurescan 3: Intersecting Identities (Glasgow School of Art, November 2015).
This article advocates an enlarged understanding of the benefits of manual creativity for critical thinking and affective making, one that blurs the boundaries, or at least works in the spaces between or beyond, amateur and professional craft practices and identities. It presents findings from the Arts and Humanities Research Council (AHRC) funded project Co-Producing CARE: Community Asset-based Research & Enterprise (https://cocreatingcare.wordpress.com). CARE worked with community groups (composed of amateur and professional textile makers) in a variety of amateur contexts, such as the kitchen table, the community cafe, and the library, to explore how critical creative making might serve as a means to co-produce community agency, assets and abilities. The research proposes that through 'acts of small citizenship' creative making can be powerfully, if quietly, activist (Orton-Johnson 2014; Hackney 2013a). Unlike more familiar craft activism, such 'acts' are not limited to overtly political and public manifestations of social action; rather, they concern the micro-politics of the individual, the grassroots community and the social everyday. The culturally marginal yet accessible nature of amateur crafts becomes a source of strength and potential as we explore its active, dissenting and paradoxically discontented aspects alongside the more frequently articulated dimensions of acceptance, consensus and satisfaction. Informed by Richard Sennett's (2012) work on cooperation, Matt Ratto and Megan Boler (2014) on DIY citizenship and critical making, Rancière's (2004) theory of the 'distribution of the sensible', and theories of embodied and enacted knowledge, the authors interpret findings from selected CARE-related case studies to explicate the various ways in which 'making' can make a difference: by providing a safe space for disagreement, reflection, resolution, collaboration, active listening, questioning and critical thinking, for instance, and by offering quiet, tenacious and life-enhancing forms of resistance and revision to hegemonic versions of culture and subjectivity.
A usability approach to improving the user experience in web directories
Web directories are hierarchically organised website collections that offer users subject-based access to the Web. They played a significant part in navigating the Web in the past, but their role has weakened in recent years due to their cumbersome, expanding collections. This thesis presents a unified framework combining the advantages of personalisation and redefined directory search to improve the usability of Web directories.
The thesis begins with an examination of classification schemes that identifies the rigidity of hierarchical classifications and their suitability for Web directories in contrast to faceted classifications. This leads on to an Ontological Sketch Modelling (OSM) case study that identifies the misfits affecting user navigation in Web directories arising from known rigidity issues. The thesis continues with a review of personalisation techniques and a discussion of the user search model of Web directories, following the directions of improvement suggested by the case study. A proposed user-centred framework to improve the usability of Web directories, consisting of an individual content-based personalisation model and a redefined search model, is then implemented as D-Persona and D-Search respectively. The remainder of the thesis concerns a usability test of D-Persona and D-Search aimed at discovering the efficiency, effectiveness and user satisfaction of the solution. This involves an experimental design, test results and discussion for the comparative user study.
This thesis extracts a formal definition of the rigidity of hierarchies from their characteristics and justifies why hierarchies are still better suited than facets for organising Web directories. Second, it identifies the misfits causing poor usability in Web directories based on the discovered rigidity of hierarchies. Third, it proposes a solution to tackle these misfits and improve the usability of Web directories, which has been experimentally shown to be successful.
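A content-based personalisation model of the kind D-Persona implements could, for instance, compare a profile built from a user's visited categories against category descriptions. The sketch below uses TF-IDF cosine similarity as a stand-in; the thesis's actual model is not given in the abstract, and the categories and visit history are invented.

```python
# Minimal content-based personalisation of a Web directory: build a user
# profile from visited category text and re-rank categories by similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

categories = {
    "Photography": "cameras lenses exposure portrait landscape",
    "Gardening": "plants soil seeds pruning compost",
    "Astronomy": "telescopes stars planets observation night sky",
}
visited = ["cameras lenses exposure", "telescopes stars observation"]

vec = TfidfVectorizer()
cat_matrix = vec.fit_transform(categories.values())
profile = vec.transform([" ".join(visited)])        # user profile vector

scores = cosine_similarity(profile, cat_matrix).ravel()
ranked = sorted(zip(categories, scores), key=lambda s: s[1], reverse=True)
print(ranked)   # personalised ordering of directory categories
```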
Absence of Information-Giving in Information Behavior Models
Beyond its applications in areas such as information storage and retrieval and knowledge management, the study of human information behavior as an independent field of knowledge has issues of its own. Information behavior inherently involves a reciprocal process between two parties: the info-giver and the info-receiver/seeker. In a logical analysis of information behavior, the elements required in this process are the information itself and its source (i.e., the info-giver). Nevertheless, most of the discussions and papers in the literature on human information behavior (perhaps because of the interest in the applications mentioned above) focus on info-seeking, and little attention is paid to the role of the info-giver as a primary party in information behavior. The author's main idea in this paper is that the info-giver is central to realizing this process. The present article reviews some of the most popular models of information behavior and the place of the info-seeker in the process, and points to the absence of the info-giver in these models. Then, by presenting some exemplary cases of information behavior initiated by the info-seeker/receiver in the absence of an info-giver, the paper emphasizes the info-giver's central role in realizing the information behavior process.
Using visualization, variable selection and feature extraction to learn from industrial data
Although engineers in industry have access to process data, they seldom use advanced statistical tools to solve process control problems. Why this reluctance? I believe the reason lies in the history of statistical tools, which were developed in an era of rigorous mathematical modelling, manual computation, and small data sets. This produced sophisticated tools whose requirements, such as the pre-processing of data, engineers do not understand. If algorithms are fed unsuitable data, or parameterised poorly, they produce unreliable results, which may lead an engineer to reject statistical analysis altogether.
This thesis looks for algorithms that may not impress the champions of statistics, but that serve process engineers. It advocates three properties in an algorithm: supervised operation, robustness, and understandability. Supervised operation allows, and requires, the user to make the goal of the analysis explicit, which lets the algorithm discover results that are relevant to the user. Robust algorithms allow engineers to analyse raw process data collected from the plant's automation system. The third property is understandability: the user must understand how to parameterise the model, what the principle of the algorithm is, and how to interpret the results.
These criteria are justified by theories of human learning. The basis is the theory of constructivism, which defines learning as the construction of mental models. I then discuss theories of organisational learning, which show how mental models influence the behaviour of groups of people. The next level discusses statistical methodologies of data analysis and binds them to the theories of organisational learning. The last level discusses individual statistical algorithms and introduces the methodology and algorithms proposed by this thesis. The methodology uses three types of algorithms: visualisation, variable selection, and feature extraction. Its goal is to reliably and understandably provide the user with information related to a problem the user has defined as interesting.
The methodology is illustrated by the analysis of an industrial case: the concentrator of the Hitura mine. This case shows how to define the problem with off-line laboratory data and how to search the on-line data for solutions. A major advantage of the algorithmic study of data is efficiency: the manual approach reported earlier took approximately six man-months, while the automated approach of this thesis produced comparable results in a few weeks.
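As an illustration of the supervised variable-selection step (the thesis's concrete algorithms are not reproduced in the abstract), raw process variables can be ranked by absolute correlation with a goal variable the engineer names; this is robust to scale and easy to interpret. The variables and data below are synthetic.

```python
# Sketch of supervised variable selection: rank process variables by
# |correlation| with an engineer-defined target. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 500
data = {
    "feed_rate": rng.normal(size=n),
    "air_flow":  rng.normal(size=n),
    "ph_level":  rng.normal(size=n),
}
# Hypothetical goal variable: mostly driven by feed rate, plus noise.
target = 0.8 * data["feed_rate"] + 0.1 * rng.normal(size=n)

ranking = sorted(
    ((name, abs(np.corrcoef(x, target)[0, 1])) for name, x in data.items()),
    key=lambda s: s[1],
    reverse=True,
)
for name, r in ranking:
    print(f"{name:10s} |r| = {r:.2f}")
```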