105 research outputs found

    Design reuse research : a computational perspective

    This paper gives an overview of some computer-based systems that focus on supporting engineering design reuse. Design reuse is considered here to reflect the utilisation of any knowledge gained from a design activity, not just past designs of artefacts. A design reuse process model, containing three main processes and six knowledge components, is used as a basis to identify the main areas of contribution from the systems. From this it can be concluded that while reuse libraries and design by reuse have received the most attention, design for reuse, domain exploration and five of the other knowledge components lack research effort.

    Model Reka Bentuk Konseptual Operasian Storan Data Bagi Aplikasi Kepintaran Perniagaan (Conceptual Design Model of an Operational Data Store for Business Intelligence Applications)

    The development of business intelligence (BI) applications, involving data sources, a Data Warehouse (DW), Data Marts (DM) and an Operational Data Store (ODS), poses a major challenge to BI developers. This is mainly due to the lack of established models, guidelines and techniques in the development process, as compared to system development in the discipline of software engineering. Furthermore, present BI applications emphasize the development of strategic information rather than operational and tactical information. Therefore, the main aim of this study is to propose a conceptual design model for BI applications using an ODS (CoDMODS). Through expert validation, the proposed conceptual design model, developed by means of a design science research approach, was found to satisfy nine quality model dimensions: it is easy to understand, covers clear steps, is relevant and timeless, and demonstrates flexibility, scalability, accuracy, completeness and consistency. Additionally, the two prototypes developed based on CoDMODS, for water supply service (iUBIS) and telecommunication maintenance (iPMS), recorded a high mean usability score of 5.912 using the Computer System Usability Questionnaire (CSUQ) instrument. The outcomes of this study, particularly the proposed model, contribute to the analysis and design method for developing operational and tactical information in BI applications. The model can serve as a guideline for BI developers. Furthermore, the prototypes developed in the case studies can assist organizations in using quality information for business operations.
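    The CSUQ instrument referenced above scores each questionnaire item on a 7-point scale, and the reported 5.912 is a mean over items and respondents. The aggregation could be sketched roughly as follows; the response values here are invented for illustration, not taken from the study:

```python
# Aggregate CSUQ-style 7-point responses into one mean usability score.
# The response matrix below is invented for illustration; in a real study
# each row would hold one respondent's answers to the questionnaire items.

def csuq_mean(responses):
    """Return the overall mean score across all respondents and items."""
    scores = [score for respondent in responses for score in respondent]
    return sum(scores) / len(scores)

responses = [
    [6, 7, 5, 6],  # respondent 1
    [5, 6, 6, 7],  # respondent 2
    [7, 6, 5, 6],  # respondent 3
]

print(csuq_mean(responses))  # overall mean on the 1-7 scale
```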

    Interim research assessment 2003-2005 - Computer Science

    This report primarily serves as a source of information for the 2007 Interim Research Assessment Committee for Computer Science at the three technical universities in the Netherlands. The report also provides information for others interested in our research activities.

    Contribution à la définition de modÚles de recherche d'information flexibles basés sur les CP-Nets

    This thesis addresses two main problems in IR: automatic query weighting and document semantic indexing. Our overall contribution is the definition of a theoretical flexible information retrieval (IR) model based on CP-Nets (Conditional Preference Networks). On the one hand, the CP-Net formalism is used for the graphical representation of flexible queries expressing qualitative preferences and for the automatic weighting of such queries. For the user, expressing qualitative preferences is simpler and more intuitive than formulating numeric weights to quantify them, whereas an automated system reasons more easily over ordinal weights; we therefore propose an approach that weights queries automatically by quantifying the corresponding CP-Nets with utility values, yielding a UCP-Net that corresponds to a weighted Boolean query. CP-Nets are likewise used to represent documents, with a view to the flexible evaluation of the queries weighted this way. On the other hand, the CP-Net formalism is used as an indexing language to represent a document's representative concepts and the conditional relations between them in a relatively compact way. Concepts are identified by projection onto WordNet and form the nodes of the CP-Net; the association-rule technique is extended and used to discover the conditional relations that link these concept nodes. Finally, a query evaluation mechanism based on graph matching between the document and query CP-Nets is proposed.
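    The UCP-Net idea sketched in this abstract, quantifying qualitative preferences with utility values so the query behaves like a weighted Boolean query, could look roughly like the toy below. The terms, parent dependencies, and utility values are all invented for illustration and do not come from the thesis:

```python
# Toy UCP-Net-style query weighting: each query term carries a utility that
# may depend on whether its parent term is satisfied, and a document is
# scored by summing the utilities of the terms it contains.
# Terms, parents, and utilities are invented for illustration.

ucp_net = {
    # term: (parent term or None, {parent_satisfied: utility})
    "retrieval": (None, {None: 0.6}),
    "flexible": ("retrieval", {True: 0.3, False: 0.1}),
}

def score(document_terms, net):
    """Sum the utilities of satisfied terms, conditioning on parent terms."""
    total = 0.0
    for term, (parent, utilities) in net.items():
        if term in document_terms:
            parent_value = None if parent is None else (parent in document_terms)
            total += utilities[parent_value]
    return total

print(score({"retrieval", "flexible"}, ucp_net))  # both terms satisfied
print(score({"flexible"}, ucp_net))               # parent term absent
```

    The conditional table mirrors the qualitative CP-Net statement "if the document is about retrieval, 'flexible' matters more", turned into ordinal weights a system can reason over.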

    Web Archive Services Framework for Tighter Integration Between the Past and Present Web

    Web archives have contained the cultural history of the web for many years, but they still offer only limited means of access. Most web archiving research has focused on crawling and preservation activities, with little focus on delivery methods. The current access methods are tightly coupled with web archive infrastructure, hard to replicate or integrate with other web archives, and do not cover all the users' needs. In this dissertation, we focus on access methods for archived web data that enable users, third-party developers, researchers, and others to gain knowledge from the web archives. We build ArcSys, a new service framework that extracts, preserves, and exposes APIs for the web archive corpus. The dissertation introduces a novel categorization technique to divide the archived corpus into four levels. For each level, we propose suitable services and APIs that enable both users and third-party developers to build new interfaces. The first level is the content level, which extracts the content from the archived web data; we develop ArcContent to expose the web archive content processed through various filters. The second level is the metadata level, where we extract the metadata from the archived web data and make it available to users; we implement two services, ArcLink for the temporal web graph and ArcThumb for optimizing thumbnail creation in the web archives. The third level is the URI level, which uses the URI HTTP redirection status to enhance the user query. Finally, the highest level in the web archiving service framework pyramid is the archive level. At this level, we define the web archive by the characteristics of its corpus and build Web Archive Profiles. The profiles are used by the Memento Aggregator for query optimization.
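    The archive-level idea, summarizing a corpus into a profile an aggregator can consult before querying the archive, could be sketched minimally as below. The profile here is just a per-host capture count, and all URIs are invented for illustration; the dissertation's actual profiles are richer:

```python
# Hedged sketch of an archive profile: summarize an archive's holdings as
# capture counts per URI host, then let an aggregator skip archives whose
# profile shows no holdings for a requested URI. URIs are invented.
from collections import Counter
from urllib.parse import urlparse

def build_profile(archived_uris):
    """Count archived captures per host to characterize the corpus."""
    return Counter(urlparse(uri).hostname for uri in archived_uris)

def should_query(profile, uri):
    """Route a lookup to this archive only if its profile covers the host."""
    return profile[urlparse(uri).hostname] > 0

corpus = [
    "http://example.com/page1",
    "http://example.com/page2",
    "http://archive.test/item",
]
profile = build_profile(corpus)
print(should_query(profile, "http://example.com/old"))   # True
print(should_query(profile, "http://unknown.org/page"))  # False
```

    Skipping archives with no relevant holdings is what lets an aggregator avoid broadcasting every request to every archive.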

    FICCS: A Fact Integrity Constraint Checking System


    Exploring the reuse of past search results in information retrieval

    Past searches provide a useful source of information for new users (new queries), yet, due to the lack of suitable ad-hoc IR collections, the IR community has so far shown little interest in reusing past search results. Indeed, most existing IR collections are composed of independent queries. These collections are not appropriate for evaluating approaches rooted in past queries because they contain no similar queries or provide no relevance judgments for them, so there is no easy way to evaluate such approaches. In addition, building such collections is difficult because of the cost and time required; a feasible alternative is to simulate them. Besides, relevant documents from similar past queries can be used to answer a new query. Many contributions have used probabilistic techniques to improve search results, and solutions for reusing search results that are simple to implement can be expressed as probabilistic algorithms. This principle can also benefit from clustering past searches according to their similarities. Thus, in this thesis, a framework for simulating collections for approaches based on past search results is implemented and evaluated. Four probabilistic algorithms for reusing past search results are then proposed and evaluated, and finally a new measure in the clustering context is proposed.
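    The core reuse principle, answering a new query with the relevant documents of a sufficiently similar past query, could be sketched as below. The similarity measure (Jaccard over term sets), the threshold, and all queries and document identifiers are invented for illustration:

```python
# Minimal sketch of reusing past search results: represent queries as term
# sets, find the most similar past query, and return its judged-relevant
# documents as candidate answers. Queries and documents are invented.

def jaccard(a, b):
    """Jaccard similarity between two term sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

past_searches = {
    "information retrieval models": {"doc1", "doc4"},
    "web archive access": {"doc7"},
}

def reuse_results(new_query, past, threshold=0.3):
    """Return relevant documents of the closest past query above threshold."""
    terms = set(new_query.split())
    best_query, best_sim = None, 0.0
    for query, docs in past.items():
        sim = jaccard(terms, set(query.split()))
        if sim > best_sim:
            best_query, best_sim = query, sim
    return past[best_query] if best_sim >= threshold else set()

print(sorted(reuse_results("flexible information retrieval", past_searches)))
```

    Clustering past searches by the same similarity, as the thesis proposes, would replace the linear scan here with a lookup against cluster representatives.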
    • 
