38 research outputs found

    Highly focused document retrieval in aerospace engineering: user interaction design and evaluation

    Purpose – This paper seeks to describe the preliminary studies (on both users and data), the design and evaluation of the K-Search system for searching legacy documents in aerospace engineering. Real-world reports of jet engine maintenance challenge the current indexing practice, while real users’ tasks require retrieving the information in the proper context. K-Search is currently in use in Rolls-Royce plc and has evolved to include other tools for knowledge capture and management.
    Design/methodology/approach – Semantic Web techniques have been used to automatically extract information from the reports while maintaining the original context, allowing a more focused retrieval than with more traditional techniques. The paper combines semantic search with classical information retrieval to increase search effectiveness. An innovative user interface has been designed to take advantage of this hybrid search technique. The interface is designed to allow a flexible and personal approach to searching legacy data.
    Findings – The user evaluation showed that the system is effective and well received by users. It also shows that different people look at the same data in different ways and make different use of the same system depending on their individual needs, influenced by their job profile and personal attitude.
    Research limitations/implications – This study focuses on a specific case of an enterprise working in aerospace engineering. Although the findings are likely to be shared with other engineering domains (e.g. mechanical, electronic), the study does not expand the evaluation to different settings.
    Originality/value – The study shows how real context of use can provide new and unexpected challenges to researchers and how effective solutions can then be adopted and used in organizations.
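    A minimal sketch of how such a hybrid query might work is given below: semantic constraints over extracted metadata are combined with a keyword relevance score into a single ranking. The field names, weighting scheme, and example records are illustrative assumptions, not the K-Search implementation.

    # Hypothetical hybrid search: blend a metadata-match score (semantic part)
    # with a keyword-match score (classical IR part). Fields and weights are
    # illustrative only.
    def hybrid_search(documents, keyword_terms, metadata_filters, alpha=0.5):
        results = []
        for doc in documents:
            # Semantic part: fraction of metadata constraints the document satisfies
            semantic = (
                sum(1 for f, v in metadata_filters.items()
                    if doc.get("metadata", {}).get(f) == v) / len(metadata_filters)
                if metadata_filters else 0.0
            )
            # Keyword part: fraction of query terms found in the free text
            text = doc.get("text", "").lower()
            keyword = (
                sum(1 for t in keyword_terms if t.lower() in text) / len(keyword_terms)
                if keyword_terms else 0.0
            )
            score = alpha * semantic + (1 - alpha) * keyword
            if score > 0:
                results.append((round(score, 3), doc["id"]))
        return sorted(results, reverse=True)

    # Example: maintenance reports mentioning "crack" on a given engine type
    docs = [
        {"id": "r1", "metadata": {"engine": "RB211", "component": "blade"},
         "text": "Crack found on stage 1 blade during overhaul."},
        {"id": "r2", "metadata": {"engine": "Trent", "component": "casing"},
         "text": "Minor corrosion on casing."},
    ]
    print(hybrid_search(docs, ["crack"], {"engine": "RB211"}))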

    ProHealth eCoach: user-centered design and development of an eCoach app to promote healthy lifestyle with personalized activity recommendations

    Background: Regular physical activity (PA), healthy habits, and an appropriate diet are recommended guidelines to maintain a healthy lifestyle. A healthy lifestyle can help to avoid chronic diseases and long-term illnesses. A monitoring and automatic personalized lifestyle recommendation system (i.e., an automatic electronic coach or eCoach) that considers clinical and ethical guidelines, individual health status, condition, and preferences may successfully help participants to follow recommendations to maintain a healthy lifestyle. As a prerequisite for the prototype design of such a helpful eCoach system, it is essential to involve the end-users and subject-matter experts throughout the iterative design process.
    Methods: We used an iterative user-centered design (UCD) approach to understand the context of use and to collect qualitative data to develop a roadmap for self-management with eCoaching. We involved researchers (non-technical and technical), health professionals, subject-matter experts, and potential end-users in the design process. We designed and developed the eCoach prototype in two stages, adopting different phases of the iterative design process. In design workshop 1, we focused on identifying end-users, understanding the user’s context, specifying user requirements, and designing and developing an initial low-fidelity eCoach prototype. In design workshop 2, we focused on maturing the low-fidelity solution design and development for the visualization of continuous and discrete data, artificial intelligence (AI)-based interval forecasting, personalized recommendations, and activity goals.
    Results: The iterative design process helped to develop a working prototype of the eCoach system that meets end-users’ requirements and expectations towards an effective recommendation visualization, considering diversity in culture, quality of life, and human values. The design provides an early version of the solution, consisting of wearable technology, a mobile app following the “Google Material Design” guidelines, and web content for self-monitoring, goal setting, and lifestyle recommendations in an engaging manner between the eCoach app and end-users.
    Conclusions: The adopted iterative design process brings a design focus on the user and their needs at each phase. Throughout the design process, users have been involved at the heart of the design to create a working prototype.
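    As a rough illustration of what interval forecasting over activity data can look like, the sketch below computes a point forecast and a simple prediction interval for daily step counts from a rolling window, then turns the result into a recommendation message. The window size, z-value, step goal, and message wording are assumptions for the example, not the ProHealth eCoach algorithms.

    # Illustrative interval forecast for daily steps: rolling mean as the point
    # forecast, rolling standard deviation for a crude prediction interval.
    import statistics

    def forecast_interval(step_history, window=7, z=1.96):
        recent = step_history[-window:]
        point = statistics.mean(recent)
        spread = statistics.stdev(recent) if len(recent) > 1 else 0.0
        return point - z * spread, point, point + z * spread

    def daily_recommendation(step_history, goal=8000):
        lower, point, upper = forecast_interval(step_history)
        if upper < goal:
            return f"Trending below the {goal}-step goal; a short walk today would help."
        return "On track; keep up the current activity level."

    history = [6500, 7200, 8100, 5900, 7800, 8300, 7000]
    print(forecast_interval(history))
    print(daily_recommendation(history))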

    A Semantic Wiki-based Platform for IT Service Management

    The book researches the use of a semantic wiki in the area of IT Service Management within the IT department of an SME. An emphasis of the book lies in the design and prototypical implementation of tools for the integration of ITSM-relevant information into the semantic wiki, as well as tools for interactions between the wiki and external programs. The result of the book is a platform for agile, semantic wiki-based ITSM for IT administration teams of SMEs
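    As an illustration of the kind of integration the platform targets, the sketch below renders an ITSM configuration-item record as Semantic MediaWiki-style property annotations for a wiki page. The CI fields and property names are assumptions for the example, not the data model described in the book.

    # Turn a configuration-item record into semantic wiki markup of the form
    # [[Property::value]]; fields and property names are illustrative only.
    def ci_to_wiki_markup(ci):
        lines = [f"== {ci['name']} =="]
        for prop, value in ci["properties"].items():
            lines.append(f"* {prop}: [[{prop}::{value}]]")
        return "\n".join(lines)

    ci = {
        "name": "mailserver01",
        "properties": {
            "Has IP address": "10.0.0.12",
            "Runs service": "SMTP",
            "Responsible team": "IT Operations",
        },
    }
    print(ci_to_wiki_markup(ci))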

    The Future of Origin of Life Research: Bridging Decades-Old Divisions.

    Research on the origin of life is highly heterogeneous. After a peculiar historical development, it still includes strongly opposed views which potentially hinder progress. At the 1st Interdisciplinary Origin of Life Meeting, early-career researchers gathered to explore the commonalities between theories and approaches, critical divergence points, and expectations for the future. We find that even though classical approaches and theories (e.g. bottom-up and top-down, RNA world vs. metabolism-first) have been prevalent in origin of life research, they are ceasing to be mutually exclusive and they can and should feed integrative approaches. Here we focus on pressing questions and recent developments that bridge the classical disciplines and approaches, and highlight expectations for future endeavours in origin of life research

    Proceedings of the Second International Workshop on Physicality, Physicality 2007


    Enhancing Data Classification Quality of Volunteered Geographic Information

    Geographic data is one of the fundamental components of any Geographic Information System (GIS). Nowadays, the utility of GIS has become part of everyday life activities, such as searching for a destination, planning a trip, looking for weather information, etc. Without a reliable data source, systems will not provide guaranteed services. In the past, geographic data was collected and processed exclusively by experts and professionals. However, the ubiquity of advanced technology has resulted in the evolution of Volunteered Geographic Information (VGI), where geographic data is collected and produced by the general public. These changes influence the availability of geographic data, as common people can work together to collect geographic data and produce maps. This particular trend is known as collaborative mapping. In collaborative mapping, the general public shares an online platform to collect, manipulate, and update information about geographic features. OpenStreetMap (OSM) is a prominent example of a collaborative mapping project, which aims to produce a free world map editable and accessible by anyone. During the last decade, VGI has expanded based on the power of crowdsourcing. The involvement of the public in data collection raises great concern about the resulting data quality.
    There exist various perspectives on geographic data quality; this dissertation focuses particularly on the quality of data classification (i.e., thematic accuracy). In professional data collection, data is classified based on quantitative and/or qualitative observations. According to a pre-defined classification model, which is usually constructed by experts, data is assigned to appropriate classes. In contrast, in most collaborative mapping projects data classification is mainly based on individuals' cognition. Through online platforms, contributors collect information about geographic features and transform their perceptions into classified entities. In VGI projects, the contributors mostly have limited experience in geography and cartography. Therefore, the acquired data may have a questionable classification quality. This dissertation investigates the challenges of data classification in VGI-based mapping projects (i.e., collaborative mapping projects). In particular, it lists the challenges relevant to the evolution of VGI as well as to the characteristics of geographic data. Furthermore, this work proposes a guiding approach to enhance the data classification quality in such projects. The proposed approach is based on the following premises: (i) the availability of large amounts of data, which fosters applying machine learning techniques to extract useful knowledge; (ii) utilization of the extracted knowledge to guide contributors to appropriate data classification; (iii) the humanitarian spirit of contributors to provide precise data when they are supported by a guidance system; and (iv) the power of crowdsourcing in data collection as well as in ensuring the data quality. This cumulative dissertation consists of five peer-reviewed publications in international conference proceedings and international journals.
    The publications divide the dissertation into three parts: the first part presents a comprehensive literature review of the relevant previous work on VGI quality assurance procedures (Chapter 2), the second part studies the foundations of the approach (Chapters 3-4), and the third part discusses the proposed approach and provides a validation example for implementing the approach (Chapters 5-6). Furthermore, Chapter 1 presents an overview of the research questions and the adopted research methodology, while Chapter 7 concludes the findings and summarizes the contributions. The proposed approach is validated through empirical studies and an implemented web application. The findings reveal the feasibility of the proposed approach. The output shows that applying the proposed approach results in enhanced data classification quality. Furthermore, the research highlights the demand for intuitive data collection and data interpretation approaches adequate for VGI-based mapping projects. An interactive data collection approach is required to guide the contributors toward enhanced data quality, while an intuitive data interpretation approach is needed to derive more precise information from rich VGI resources
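    A minimal sketch of the guidance idea is shown below: previously classified features are used to suggest a likely class for a newly drawn feature, in the spirit of applying machine learning over existing VGI. The chosen features (footprint area, number of corner nodes), example classes, and the nearest-neighbour rule are illustrative assumptions, not the dissertation's actual model.

    # Suggest a feature class from previously classified examples using a
    # simple k-nearest-neighbour vote; training data and features are
    # illustrative only.
    import math

    TRAINING = [
        # (area_m2, corner_nodes) -> class
        ((120, 4), "residential building"),
        ((90, 4), "residential building"),
        ((2500, 12), "supermarket"),
        ((3000, 16), "supermarket"),
        ((15, 4), "garage"),
    ]

    def suggest_class(area_m2, corner_nodes, k=3):
        dists = []
        for (a, n), label in TRAINING:
            d = math.hypot(area_m2 - a, (corner_nodes - n) * 50)  # crude scaling
            dists.append((d, label))
        nearest = [label for _, label in sorted(dists)[:k]]
        return max(set(nearest), key=nearest.count)

    # A contributor draws a 2800 m^2 polygon with 14 corner nodes:
    print(suggest_class(2800, 14))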

    Adaptive object-modeling: patterns, tools and applications

    Doctoral Programme thesis in Informatics. Universidade do Porto, Faculdade de Engenharia. 201

    A New Service on the Texas Legal Horizon: Texas Supreme Court Index

    Since the late 1950's the Texas Supreme Court Journal has been a mainstay of Texas lawyers, offering speedy copies of all supreme court opinions as well as writ dispositions. Though the Texas Supreme Court Journal is invaluable, it does have one major deficiency. Marian Boner's Reference Guide to Texas Law and Legal History puts the problem succinctly: There is no cumulation, and the cases are not indexed. A new publication, Texas Supreme Court Index+, is now making a creditable bid to fill that gap. A weekly service originating in Houston, the Index+ contains up-to-date data on the disposition of appeals by the Texas Supreme Court. The Texas Supreme Court Index+ is not designed to replace or compete with the Journal. It does not contain the text of opinions, synopses of applications granted, or the points of error upon which they were granted. What the Texas Supreme Court Index+ does provide is a convenient and rapid way to ascertain the current status of a case pending before the supreme court. The Index+ is an 8 1/2 by 11 photocopy of a computer printout. A case can be found either alphabetically by its name, through a direct and reverse index to case styles, or by checking the court of appeals citation, in much the same way as one would use the West writs tables. The citation index gives the current case status; the direct index provides full procedural history, including motions, date of argument, and so on

    Data Mining Algorithms for Internet Data: from Transport to Application Layer

    Nowadays we live in a data-driven world. Advances in data generation, collection and storage technology have enabled organizations to gather data sets of massive size. Data mining is a discipline that blends traditional data analysis methods with sophisticated algorithms to handle the challenges posed by these new types of data sets. The Internet is a complex and dynamic system with new protocols and applications that arise at a constant pace. All these characteristics make the Internet a valuable and challenging data source and application domain for research activity, both looking at the Transport layer, analyzing network traffic flows, and going up to the Application layer, focusing on the ever-growing next generation web services: blogs, micro-blogs, on-line social networks, photo sharing services and many other applications (e.g., Twitter, Facebook, Flickr, etc.). In this thesis work we focus on the study, design and development of novel algorithms and frameworks to support large scale data mining activities over huge and heterogeneous data volumes, with a particular focus on Internet data as a data source, targeting network traffic classification, on-line social network analysis, recommendation systems, cloud services and Big Data
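    As a small illustration of flow-level work at the Transport layer, the sketch below classifies a network flow from a few per-flow features using hand-written thresholds. The feature set, thresholds, and class labels are assumptions for the example, not the data mining algorithms developed in the thesis.

    # Toy rule-based classification of a transport-layer flow; thresholds and
    # labels are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Flow:
        duration_s: float   # flow duration in seconds
        bytes_total: int    # total bytes transferred
        packets: int        # number of packets in the flow
        dst_port: int       # server-side port

    def classify_flow(flow):
        avg_packet = flow.bytes_total / max(flow.packets, 1)
        if flow.dst_port in (80, 443) and avg_packet > 1000 and flow.duration_s > 30:
            return "bulk download / streaming"
        if flow.dst_port in (80, 443):
            return "interactive web"
        if flow.duration_s < 1 and flow.packets <= 3:
            return "scan / probe"
        return "other"

    print(classify_flow(Flow(duration_s=120, bytes_total=5_000_000, packets=4000, dst_port=443)))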