
    Topic Map Generation Using Text Mining

    Starting from text corpus analysis with linguistic and statistical algorithms, an infrastructure for text mining is described that uses collocation analysis as its central tool. This text mining method may be applied to different domains as well as different languages. Examples taken from large reference databases motivate its applicability to knowledge management using declarative standards for structuring and describing information. The ISO/IEC Topic Map standard is introduced as a candidate for rich metadata description of information resources, and it is shown how text mining can be used for automatic topic map generation.
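
    The abstract does not name the collocation measure it uses; a minimal sketch of one common choice, pointwise mutual information (PMI) over adjacent word pairs, is shown below. The toy corpus, the function name, and the min_count threshold are illustrative assumptions, not details from the paper.

        import math
        from collections import Counter

        def pmi_collocations(tokens, min_count=2):
            """Rank adjacent word pairs (candidate collocations) by PMI."""
            unigrams = Counter(tokens)
            bigrams = Counter(zip(tokens, tokens[1:]))
            n = len(tokens)
            scores = {}
            for (w1, w2), count in bigrams.items():
                if count < min_count:
                    continue
                # PMI = log P(w1,w2) / (P(w1) P(w2)); high scores suggest a genuine collocation.
                scores[(w1, w2)] = math.log((count / n) / ((unigrams[w1] / n) * (unigrams[w2] / n)))
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        # Hypothetical use: top-ranked pairs become candidate topic names for the generated map.
        tokens = "topic map generation uses text mining and topic map standards".split()
        print(pmi_collocations(tokens, min_count=1))

    In a pipeline like the one described, such ranked collocations would feed the next step: naming topics and proposing associations between them.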

    Topic Maps as a Virtual Observatory tool

    One major component of the VO will be catalogs measuring gigabytes and terabytes, if not more. Some mechanism such as XML will be used for structuring the information. However, such mechanisms are not good for information retrieval on their own; for retrieval we use queries. Topic Maps, which have recently started becoming popular, are excellent for segregating the information that results from a query. A Topic Map is a structured network of hyperlinks above an information pool. Different Topic Maps can form different layers above the same information pool and provide us with different views of it. This makes it possible to ask exact questions, aiding us in looking for gold needles in the proverbial haystack. Here we discuss the specifics of what Topic Maps are and how they can be implemented within the VO framework.
    URL: http://www.astro.caltech.edu/~aam/science/topicmaps/
    Comment: 11 pages, 5 eps figures, to appear in SPIE Annual Meeting 2001 proceedings (Astronomical Data Analysis), uses spie.st
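
    As a toy illustration of the layering idea, the sketch below models a Topic Map in Python as topics with occurrences (links into the information pool) plus associations between topics; two maps over the same pool then give two different views of it. The dataclasses and the catalog URIs are hypothetical, not from the paper.

        from dataclasses import dataclass, field

        @dataclass
        class Topic:
            name: str
            occurrences: list = field(default_factory=list)  # links into the information pool

        @dataclass
        class TopicMap:
            topics: dict = field(default_factory=dict)
            associations: list = field(default_factory=list)  # (topic, relation, topic) triples

            def add(self, name, occurrence=None):
                topic = self.topics.setdefault(name, Topic(name))
                if occurrence is not None:
                    topic.occurrences.append(occurrence)
                return topic

        # Two independent layers above the same hypothetical catalog pool:
        by_type = TopicMap()
        by_type.add("quasar", "catalog://entry/1042")
        by_type.associations.append(("quasar", "subclass-of", "AGN"))
        by_survey = TopicMap()
        by_survey.add("DPOSS", "catalog://entry/1042")  # same pool entry, different view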

    The development of local solar irradiance for outdoor computer graphics rendering

    Atmospheric effects are approximated by solving the light transfer equation (LTE) along a given viewing path. The accumulated spectral energy (in its visible band) arriving at the observer's eyes defines the colour of the object currently on the line of sight. Owing to the convenience of using a single rendering equation to solve the LTE for both the daylight sky and distant objects (aerial perspective), recent methods have opted for a similar approach. However, the burden of real-time calculation has forced these methods to make simplifications that are not in line with real-world observation; consequently, their results are laden with visual errors. The two most common simplifications are: (i) treating the atmosphere as a full-scattering medium only, and (ii) assuming a single-density atmosphere profile. This research explored the possibility of replacing the real-time calculation involved in solving the LTE with an analytical approach, so that the two simplifications made by previous real-time methods can be avoided. The model was implemented on top of a flight simulator prototype system, since the requirements of such a system match the objectives of this study. Results were verified against actual images of daylight skies, and comparisons were made with previous methods' results to showcase the proposed model's strengths and advantages over its peers.
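
    The abstract does not reproduce the LTE itself; for orientation, a common single-scattering form along a viewing path of length s is sketched below in LaTeX. The notation (L_0 for the distant object's radiance, \sigma_t and \sigma_s for extinction and scattering coefficients, p for the phase function, E_sun and T_sun for sun irradiance and its transmittance to the scattering point) is an assumption, not the paper's.

        L(s) = L_0 \, e^{-\tau(0,s)}
             + \int_0^{s} \sigma_s(x)\, p(\theta)\, E_{\mathrm{sun}}\, T_{\mathrm{sun}}(x)\, e^{-\tau(0,x)}\, dx,
        \qquad \tau(a,b) = \int_a^b \sigma_t(u)\, du

    The first term is the attenuated radiance of the distant object; the integral accumulates sunlight scattered toward the observer along the path, which is what produces aerial perspective.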

    Linked Data - the story so far

    The term “Linked Data” refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions: the Web of Data. In this article, the authors present the concept and technical principles of Linked Data, and situate these within the broader context of related technological developments. They describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.
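
    The Linked Data best practices (name things with HTTP URIs, make those URIs dereferenceable to useful RDF, and link out to other URIs) can be made concrete with a small RDF graph. Below is a minimal sketch using the rdflib Python library; the example.org namespace and the FOAF vocabulary choices are illustrative assumptions.

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import FOAF, RDF

        EX = Namespace("http://example.org/people/")  # hypothetical data-provider namespace

        g = Graph()
        g.bind("foaf", FOAF)
        alice = URIRef(EX["alice"])                   # an HTTP URI naming a real-world thing
        g.add((alice, RDF.type, FOAF.Person))
        g.add((alice, FOAF.name, Literal("Alice")))
        # An RDF link into another data space, stitching this graph into the Web of Data:
        g.add((alice, FOAF.based_near, URIRef("http://dbpedia.org/resource/Berlin")))

        print(g.serialize(format="turtle"))           # what a client could receive on dereferencing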

    Saliency Prediction for Mobile User Interfaces

    We introduce models for saliency prediction for mobile user interfaces. A mobile interface may include elements such as buttons and text, in addition to natural images, which together enable performing a variety of tasks. Saliency in natural images is a well-studied area. However, given the differences in what constitutes a mobile interface and in the usage context of these devices, we postulate that saliency prediction for mobile interface images requires a fresh approach. Mobile interface design involves operating on elements, the building blocks of the interface. We first collected eye-gaze data from mobile devices for a free-viewing task. Using this data, we develop a novel autoencoder-based multi-scale deep learning model that provides saliency prediction at the mobile interface element level. Compared to saliency prediction approaches developed for natural images, we show that our approach performs significantly better on a range of established metrics.
    Comment: Paper accepted at WACV 201
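
    The abstract specifies only that the model is "autoencoder based" and "multi-scale"; the PyTorch sketch below shows one plausible reading, in which a shared convolutional encoder is applied to an interface screenshot at two scales and a decoder produces a per-pixel saliency map. The layer shapes, and the module itself, are assumptions rather than the published architecture.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class MultiScaleSaliencyAE(nn.Module):
            def __init__(self):
                super().__init__()
                self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                                         nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
                self.dec = nn.Sequential(nn.ConvTranspose2d(64, 16, 4, stride=4),
                                         nn.Conv2d(16, 1, 1))

            def forward(self, x):
                f1 = self.enc(x)                            # features at 1/4 input resolution
                f2 = self.enc(F.avg_pool2d(x, 2))           # same encoder on a coarser scale
                f2 = F.interpolate(f2, size=f1.shape[-2:])  # align the two feature maps
                return torch.sigmoid(self.dec(torch.cat([f1, f2], dim=1)))  # per-pixel saliency

        # Hypothetical usage on a batch of 256x256 screenshots:
        saliency = MultiScaleSaliencyAE()(torch.rand(1, 3, 256, 256))  # -> (1, 1, 256, 256)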

    AutoDiscern: Rating the Quality of Online Health Information with Hierarchical Encoder Attention-based Neural Networks

    Patients increasingly turn to search engines and online content before, or in place of, talking with a health professional. Low-quality health information, which is common on the internet, presents risks to the patient in the form of misinformation and a possibly poorer relationship with their physician. To address this, the DISCERN criteria (developed at the University of Oxford) are used to evaluate the quality of online health information. However, patients are unlikely to take the time to apply these criteria to the health websites they visit. We built an automated implementation of the DISCERN instrument (Brief version) using machine learning models. We compared the performance of a traditional model (Random Forest) with that of a hierarchical encoder attention-based neural network (HEA) model using two language embeddings, BERT and BioBERT. The HEA BERT and BioBERT models achieved average F1-macro scores across all criteria of 0.75 and 0.74, respectively, outperforming the Random Forest model (average F1-macro = 0.69). Overall, the neural-network-based models achieved 81% and 86% average accuracy at 100% and 80% coverage, respectively, compared to 94% manual rating accuracy. The attention mechanism implemented in the HEA architectures not only provided 'model explainability' by identifying reasonable supporting sentences for the documents fulfilling the Brief DISCERN criteria, but also boosted F1 performance by 0.05 compared to the same architecture without an attention mechanism. Our research suggests that it is feasible to automate online health information quality assessment, which is an important step towards empowering patients to become informed partners in the healthcare process.
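
    The abstract credits the attention mechanism with both the F1 boost and the ability to surface supporting sentences. The sketch below shows that idea in PyTorch at its smallest: additive attention over precomputed sentence embeddings, pooled into a document vector, with one binary head for a single DISCERN criterion. The dimensions and pooling scheme are assumptions, not the paper's exact architecture.

        import torch
        import torch.nn as nn

        class SentenceAttentionHead(nn.Module):
            """Scores each sentence, pools a document vector, predicts one criterion."""
            def __init__(self, dim=768):             # 768 matches BERT-base embeddings
                super().__init__()
                self.attn = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(), nn.Linear(128, 1))
                self.clf = nn.Linear(dim, 2)

            def forward(self, sents):                # sents: (num_sentences, dim)
                weights = torch.softmax(self.attn(sents), dim=0)  # one weight per sentence
                doc = (weights * sents).sum(dim=0)                # attention-pooled document vector
                return self.clf(doc), weights        # logits + "supporting sentence" scores

        # Hypothetical usage: 12 sentence embeddings from a frozen BERT encoder.
        logits, weights = SentenceAttentionHead()(torch.rand(12, 768))

    The returned weights are what would let the system point at the sentences that justify a rating, which is the explainability behaviour the abstract describes.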