18,604 research outputs found

    The applicability of Process Mining to determine and align process model descriptions

    Get PDF
    Within HU University of Applied Sciences (HU), the department HU Services (HUS) does not have sufficient insight into its IT Service Management processes to align them with the new information system that has been implemented to support the service management function. The resulting problem is that it is not clear to the HU how the actual Incident Management process, as facilitated by the application, is executed. Consequently, it is also unclear what adjustments have to be made to the process descriptions so that they resemble the process in the IT Service Management tool. To determine the actual process, the HU wants to use Process Mining. The research question for this study is therefore: ‘How is Process Mining applicable to determine the actual Incident Management process and align this to the existing process model descriptions?’ For this research a case study was performed in which Process Mining was used to check whether the actual process resembles the predefined process. The findings show that it is not possible to mine the process within the scope of the predefined process: the event data are too limited in granularity. From this we conclude that it is important to adjust the granularity of the given process model to that of the available event data, or vice versa.
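
    A minimal sketch of the kind of mining step involved, assuming the IT Service Management tool can export its event log as a CSV with hypothetical case_id, activity and timestamp columns; the study itself relies on dedicated Process Mining tooling, so this only illustrates the directly-follows relation that discovery techniques start from.

        import pandas as pd
        from collections import Counter

        def directly_follows(path: str) -> Counter:
            """Count how often one activity directly follows another within a case."""
            log = pd.read_csv(path, parse_dates=["timestamp"])
            log = log.sort_values(["case_id", "timestamp"])
            pairs = Counter()
            for _, trace in log.groupby("case_id"):
                activities = trace["activity"].tolist()
                # Each adjacent pair (a, b) is one observed directly-follows step.
                pairs.update(zip(activities, activities[1:]))
            return pairs

        if __name__ == "__main__":
            for (src, dst), count in directly_follows("incident_events.csv").most_common(10):
                print(f"{src} -> {dst}: {count}")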

    Joint Intermodal and Intramodal Label Transfers for Extremely Rare or Unseen Classes

    Full text link
    In this paper, we present a label transfer model from texts to images for image classification tasks. The problem of image classification is often much more challenging than text classification. On one hand, labeled text data are more widely available than labeled images for classification tasks. On the other hand, text data tend to have natural semantic interpretability and are often more directly related to class labels. In contrast, image features are not directly related to the concepts inherent in class labels. One of our goals in this paper is to develop a model that reveals the functional relationships between text and image features so as to directly transfer intermodal and intramodal labels to annotate the images. This is implemented by learning a transfer function as a bridge to propagate labels between the two multimodal spaces. However, intermodal label transfer can be undermined by blindly transferring the labels of noisy texts to annotate images. To mitigate this problem, we present an intramodal label transfer process, which complements the intermodal label transfer by transferring image labels instead when relevant text is absent from the source corpus. In addition, we generalize the intermodal label transfer to the zero-shot learning scenario, where only text examples are available to label unseen classes of images, without any positive image examples. We evaluate our algorithm on an image classification task and show its effectiveness with respect to the compared algorithms. Comment: The paper has been accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence and will appear in a future issue.
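
    As a rough illustration of the transfer-function idea (not the authors' actual model), the sketch below fits a linear map from an image feature space into a text feature space on synthetic paired data and then annotates new images with the label of the nearest text example; all dimensions and data are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        text_feats = rng.normal(size=(200, 50))   # labelled text examples
        text_labels = rng.integers(0, 5, size=200)
        img_feats = rng.normal(size=(200, 64))    # images paired with the texts above
        img_query = rng.normal(size=(10, 64))     # unlabelled images to annotate

        # Learn a linear transfer function W mapping image features into the
        # text feature space (ordinary least squares).
        W, *_ = np.linalg.lstsq(img_feats, text_feats, rcond=None)

        # Intermodal transfer: project the query images and copy the label of
        # the nearest text example in the shared space.
        projected = img_query @ W
        dists = np.linalg.norm(projected[:, None, :] - text_feats[None, :, :], axis=2)
        transferred_labels = text_labels[dists.argmin(axis=1)]
        print(transferred_labels)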

    Robust audio indexing for Dutch spoken-word collections

    Get PDF
    Whereas the growth of storage capacity is in line with widely acknowledged predictions, the possibilities to index and access the archives being created are lagging behind. This is especially the case in the oral history domain, and much of the rich content in these collections risks remaining inaccessible for lack of robust search technologies. This paper addresses the history and development of robust audio indexing technology for searching Dutch spoken-word collections and compares Dutch audio indexing in the well-studied broadcast news domain with an oral-history case study. It is concluded that, despite significant advances in Dutch audio indexing technology and demonstrated applicability in several domains, further research is indispensable for successful automatic disclosure of spoken-word collections.
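
    To give a flavour of what an audio index looks like once the speech recogniser has done its work, the toy sketch below builds an inverted index over hypothetical word-level timestamps so that a query word maps back to a recording and the time it was spoken; the systems described in the paper operate on full Dutch ASR output rather than this hand-made data.

        from collections import defaultdict

        # Hypothetical ASR output: recording id -> list of (word, start time in seconds).
        transcripts = {
            "interview_017": [("rotterdam", 12.4), ("bombardement", 13.1), ("mei", 14.0)],
            "interview_042": [("wederopbouw", 3.2), ("rotterdam", 4.0)],
        }

        # Inverted index: word -> list of (recording, start time).
        index = defaultdict(list)
        for recording, words in transcripts.items():
            for word, start in words:
                index[word].append((recording, start))

        def search(term: str):
            return index.get(term.lower(), [])

        print(search("Rotterdam"))  # -> [('interview_017', 12.4), ('interview_042', 4.0)]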

    Analogy Mining for Specific Design Needs

    Full text link
    Finding analogical inspirations in distant domains is a powerful way of solving problems. However, as the number of inspirations that could be matched and the dimensions on which that matching could occur grow, it becomes challenging for designers to find inspirations relevant to their needs. Furthermore, designers are often interested in exploring specific aspects of a product: for example, one designer might be interested in improving the brewing capability of an outdoor coffee maker, while another might wish to optimize for portability. In this paper we introduce a novel system for targeting analogical search at specific needs. Specifically, we contribute an analogical search engine for expressing and abstracting specific design needs that returns more distant yet relevant inspirations than alternative approaches.
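
    The sketch below mimics the basic retrieval step with plain TF-IDF and a query that expresses only the abstracted need, so matches can come from distant product domains; the paper's engine uses learned representations of design needs rather than this bag-of-words stand-in, and the corpus here is invented.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        corpus = [
            "collapsible silicone camping bowl that folds flat",
            "espresso machine with built-in grinder for the kitchen",
            "ultralight backpacking stove that packs into a pocket",
        ]
        need = "make an outdoor coffee maker easy to carry and pack"  # abstracted need

        vectorizer = TfidfVectorizer(stop_words="english")
        doc_vecs = vectorizer.fit_transform(corpus)
        need_vec = vectorizer.transform([need])

        # Rank candidate inspirations by similarity to the expressed need.
        scores = cosine_similarity(need_vec, doc_vecs).ravel()
        for text, score in sorted(zip(corpus, scores), key=lambda x: -x[1]):
            print(f"{score:.2f}  {text}")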

    Intelligent Information Access to Linked Data - Weaving the Cultural Heritage Web

    Get PDF
    The subject of the dissertation is an information alignment experiment between two cultural heritage information systems (ALAP): the Perseus Digital Library and Arachne. In modern societies, information integration is gaining importance for many tasks, such as business decision making or even catastrophe management. It is beyond doubt that information available in digital form can offer users new ways of interaction. Also, in the humanities and cultural heritage communities, more and more information is being published online. But in many situations the way that information has been made publicly available hampers the research process due to its heterogeneity and distribution. Therefore, integrated information will be a key factor in pursuing successful research, and the need for information alignment is widely recognized. ALAP is an attempt to integrate information from Perseus and Arachne, not only on the schema level but also by performing entity resolution. To that end, technical peculiarities and philosophical implications of the concepts of identity and co-reference are discussed. Multiple approaches to information integration and entity resolution are discussed and evaluated. The methodology used to implement ALAP is mainly rooted in the fields of information retrieval and knowledge discovery. First, an exploratory analysis was performed on both information systems to get a first impression of the data. After that, (semi-)structured information from both systems was extracted and normalized. Then, a clustering algorithm was used to reduce the number of entity comparisons needed. Finally, a thorough matching was performed on the different clusters. ALAP helped to identify challenges and highlighted the opportunities that arise when attempting to align cultural heritage information systems.
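
    A compact sketch of the blocking-plus-matching pattern described above, on invented records with only a name field; the real ALAP pipeline extracts and normalises much richer (semi-)structured data from Perseus and Arachne before clustering and matching.

        from collections import defaultdict
        from difflib import SequenceMatcher

        perseus = ["Temple of Apollo at Delphi", "Athena Parthenos", "Temple of Hera I"]
        arachne = ["Apollontempel, Delphi", "Heratempel I", "Statue der Athena Parthenos"]

        def normalise(name: str) -> str:
            return name.lower().replace(",", " ").strip()

        # Blocking: only compare records that share at least one token,
        # which avoids the full quadratic number of comparisons.
        blocks = defaultdict(lambda: ([], []))
        for source_idx, records in enumerate((perseus, arachne)):
            for record in records:
                for token in set(normalise(record).split()):
                    blocks[token][source_idx].append(record)

        candidates = {(a, b) for left, right in blocks.values() for a in left for b in right}

        # Thorough matching within the candidate pairs only.
        for a, b in sorted(candidates):
            score = SequenceMatcher(None, normalise(a), normalise(b)).ratio()
            if score > 0.5:
                print(f"{score:.2f}  {a}  <->  {b}")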

    Online visibility of software-related web sites: The case of biomedical text mining tools

    Get PDF
    Supplementary material associated with this article can be found, in the online version, at doi: 10.1016/j.ipm.2018.11.011. The Internet in general, and the WWW in particular, have become an immediate, practical means of introducing software tools and resources and, most importantly, a key vehicle to attract the attention of potential users. In this scenario, content organization as well as different development practices may affect the online visibility of the target resource. Therefore, the careful selection, organization and presentation of contents are critical to guarantee that the main features of the target tool can be easily discovered by potential visitors, while ensuring proper indexing by automatic online systems and resource recognizers. Understanding how software is depicted in scientific manuscripts and comparing these texts with the corresponding online descriptions can help to improve the visibility of the target website. It is particularly relevant to be able to align online descriptions with those found in the literature, and to use the resulting knowledge to improve software indexing and grouping. Therefore, this paper presents a novel method for formally defining and mining software-related websites and related literature with the ultimate aim of improving the global online visibility of the software. As a proof of concept, the method was used to evaluate the online visibility of biomedical text mining tools. These tools have evolved considerably in the last decades and bring together a heterogeneous development community as well as various user groups. For the most part, these tools are not easily discovered via general search engines. Hence, the proposed method enabled the identification of specific issues regarding the visibility of these online contents and the discussion of some possible improvements. The SING group thanks CITI (Centro de Investigación, Transferencia e Innovación) of the University of Vigo for hosting its IT infrastructure. This work was partially supported by the Portuguese Foundation for Science and Technology (FCT) under the scope of the strategic funding of the UID/BIO/04469/2013 unit and COMPETE2020 (POCI-01-0145-FEDER-006684). The authors also acknowledge the Ph.D. grants of Martín Pérez-Pérez and Gael Pérez-Rodríguez, funded by the Xunta de Galicia.
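
    As a minimal illustration of one building block of such a method, the sketch below measures the term overlap between a tool's website description and the way the same tool is described in a manuscript; both snippets and the tool name are invented, and the paper's actual pipeline crawls real websites and mines the literature at scale.

        import re

        def tokens(text: str) -> set[str]:
            return set(re.findall(r"[a-z]+", text.lower()))

        # Invented example snippets for a hypothetical tool.
        website = "GeneTagger is a fast web tool for tagging gene mentions in PubMed abstracts."
        paper = "We present GeneTagger, a biomedical text mining tool that recognises gene and protein mentions."

        web_terms, paper_terms = tokens(website), tokens(paper)
        jaccard = len(web_terms & paper_terms) / len(web_terms | paper_terms)

        print(f"Jaccard overlap: {jaccard:.2f}")
        print("Shared terms:", ", ".join(sorted(web_terms & paper_terms)))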

    An information assistant system for the prevention of tunnel vision in crisis management

    Get PDF
    In the crisis management environment, tunnel vision is a set of biases in decision makers’ cognitive processes which often leads to incorrect understanding of the real crisis situation, biased perception of information, and improper decisions. The tunnel vision phenomenon is a consequence of both the challenges of the task and the natural limitations of human cognition. An information assistant system is proposed with the purpose of preventing tunnel vision. The system serves as a platform for monitoring the ongoing crisis event: all information goes through the system before it arrives at the user. The system enhances the data quality, reduces the data quantity and presents the crisis information in a manner that prevents or repairs the user’s cognitive overload. While working with such a system, the users (crisis managers) are expected to be more likely to stay aware of the actual situation, stay open-minded to possibilities, and make proper decisions.