439 research outputs found

    Predicting Medication Prescription Rankings with Medication Relation Network

    Medication prescription rankings and demand prediction could benefit both medication consumers and pharmaceutical companies in various ways. Our study predicts medication prescription rankings by focusing on patients’ medication switch and combination behavior, an innovative genre of medication knowledge that can be learned from unstructured patient-generated content. We first construct two supervised machine learning systems for medication reference identification and medication relation classification from unstructured patient reviews. We then map the medication switch and combination relations into directed and undirected networks, respectively. An adjusted transition in and out (ATIO) system is proposed for predicting medication prescription rankings. The proposed system demonstrates the highest positive correlation with actual medication prescription amounts compared with other network-based measures. To predict changes in prescription demand, we compare four predictive regression models. The model incorporating the network-based measure from the ATIO system achieves the lowest mean squared error.
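
    The abstract does not give the ATIO formulation, so the following is only a minimal sketch of the general idea of ranking medications on a directed switch network, using networkx PageRank as a stand-in measure; the drug names and switch counts are invented.

```python
# Hedged sketch: ranking medications on a directed "switch" network.
# PageRank stands in for the paper's ATIO measure, whose exact
# formulation is not given in the abstract; edges are toy data.
import networkx as nx

# Directed edges: (from_drug, to_drug, number_of_observed_switches).
switches = [
    ("drugA", "drugB", 12),
    ("drugB", "drugC", 5),
    ("drugC", "drugA", 3),
    ("drugA", "drugC", 7),
]

G = nx.DiGraph()
for src, dst, n in switches:
    G.add_edge(src, dst, weight=n)

# Switches toward a drug raise its standing (in-links), which
# PageRank rewards naturally on a weighted directed graph.
scores = nx.pagerank(G, weight="weight")
for drug in sorted(scores, key=scores.get, reverse=True):
    print(drug, round(scores[drug], 3))
```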

    Corporate Smart Content Evaluation

    Nowadays, a wide range of information sources is available due to the evolution of the web and the collection of data. Much of this information is consumable and usable by humans but not understandable and processable by machines. Some data may be directly accessible in web pages or via data feeds, but most of the meaningful existing data is hidden within deep-web databases and enterprise information systems. Besides the inability to access a wide range of data, manual processing by humans is effortful, error-prone, and no longer adequate. Semantic web technologies deliver capabilities for machine-readable, exchangeable content and metadata for automatic processing of content. Enriching heterogeneous data with background knowledge described in ontologies promotes reusability and supports automatic processing of data. The establishment of “Corporate Smart Content” (CSC) - semantically enriched data with high information content and sufficient benefits in economic areas - is the main focus of this study. We describe three current research areas in the field of CSC concerning scenarios and datasets applicable to corporate applications, algorithms, and research. Aspect-oriented Ontology Development advances modular ontology development and partial reuse of existing ontological knowledge. Complex Entity Recognition enhances traditional entity recognition techniques to recognize clusters of related textual information about entities. Semantic Pattern Mining combines semantic web technologies with pattern learning to mine for complex models by attaching background knowledge. This study introduces the aforementioned topics by analyzing applicable scenarios with an economic and industrial focus, as well as a research emphasis. Furthermore, a collection of existing datasets for the given areas of interest is presented and evaluated. The target audience includes researchers and developers of CSC technologies - people interested in semantic web features, ontology development, automation, and extracting and mining valuable information in corporate environments. The aim of this study is to provide a comprehensive overview of the three topics, assist decision making in relevant scenarios, and help in choosing practical datasets for evaluating custom problem statements. Detailed descriptions of the datasets’ attributes and metadata should serve as a starting point for individual ideas and approaches.
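
    As a concrete illustration of the kind of semantic enrichment described above, here is a minimal, hypothetical rdflib sketch: a plain data record is typed against a tiny ontology, and background knowledge (a subclass axiom) makes it retrievable by a more general query. The namespace and classes are invented, not from the study.

```python
# Hedged sketch: enriching a plain data record with ontology background
# knowledge using rdflib. The EX namespace is illustrative only.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/csc#")
g = Graph()
g.bind("ex", EX)

# Background knowledge (a tiny "ontology"): Printer is a kind of Device.
g.add((EX.Printer, RDFS.subClassOf, EX.Device))

# Enriched instance data: an enterprise record typed against the ontology.
g.add((EX.item42, RDF.type, EX.Printer))
g.add((EX.item42, RDFS.label, Literal("Office printer, 3rd floor")))

# Query for all Devices; the subclass axiom makes the Printer retrievable.
q = """
PREFIX ex: <http://example.org/csc#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?item WHERE {
  ?item a ?cls .
  ?cls rdfs:subClassOf* ex:Device .
}
"""
for row in g.query(q):
    print(row.item)
```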

    An informatics approach to prioritizing risk assessment for chemicals and chemical combinations based on near-field exposure from consumer products

    Over 80,000 chemicals are registered under the U.S. Toxic Substances Control Act of 1976, but only a few hundred have been screened for human toxicity. Not even those used in everyday consumer products, and known to have widespread exposure in the general population, have been screened. Toxicity screening is time-consuming, expensive, and complex because simultaneous or sequential exposure to multiple environmental stressors can affect chemical toxicity. Cumulative risk assessments consider multiple stressors, but it is impractical to test every chemical combination and environmental stressor to which people are exposed. The goal of this research is to prioritize the chemical ingredients in consumer products and their most prevalent combinations for risk assessment based on likely exposure and retention. This work is motivated by two concerns. The first, as noted above, is the vast number of environmental chemicals with unknown toxicity. Our body burden (or chemical load) is much greater today than a century ago. The second motivating concern is the mounting evidence that many of these chemicals are potentially harmful. This makes us unwitting participants in a vast, uncontrolled biochemistry experiment. An informatics approach is developed here that uses publicly available data to estimate chemical exposure from everyday consumer products, which account for a significant proportion of overall chemical load. Several barriers have to be overcome in order for this approach to be effective. First, a structured database of consumer products has to be created. Even though such data is largely public, it is not readily available or easily accessible. The requisite consumer product information is retrieved from online retailers. The resulting database contains brand, name, ingredients, and category for tens of thousands of unique products. Second, chemical nomenclature is often ambiguous. Synonymy (i.e., different names for the same chemical) and homonymy (i.e., the same name for different chemicals) are rampant. The PubChem Compound database, and to a lesser extent the Unified Medical Language System, are used to map chemicals to unique identifiers. Third, lists of toxicologically interesting chemicals have to be compiled. Fortunately, several authoritative bodies (e.g., the U.S. Environmental Protection Agency) publish lists of suspected harmful chemicals to be prioritized for risk assessment. Fourth, tabulating the mere presence of potentially harmful chemicals and their co-occurrence within consumer product formulations is not as interesting as quantifying likely exposure based on consumer usage patterns and product usage modes, so product usage patterns from actual consumers are required. A suitable dataset is obtained from the Kantar Worldpanel, a market analysis firm that tracks consumer behavior. Finally, a computationally feasible probabilistic approach has to be developed to estimate likely exposure and retention for individual chemicals and their combinations. The former is defined here as the presence of a chemical in a product used by a consumer. The latter is exposure combined with the relative likelihood that the chemical will be absorbed by the consumer based on a product’s usage mode (e.g., whether the product is rinsed off or left on after use). The results of four separate analyses are presented here to show the efficacy of the informatics approach.
The first is a proof-of-concept demonstrating that the first two barriers, creating the consumer product database and dealing with chemical synonymy and homonymy, can be overcome and that the resulting system can measure the per-product prevalence of a small set of target chemicals (55 asthma-associated and endocrine-disrupting compounds) and their combinations. A database of 38,975 distinct consumer products and 32,231 distinct ingredient names was created by scraping Drugstore.com, an online retailer. Nearly one-third of the products (11,688 products, 30%) contained ≥1 target chemical and 5,229 products (13%) contained >1. Of the 55 target chemicals, 31 (56%) appear in ≥1 product and 19 (35%) appear under more than one name. The most frequent 3-way chemical combination (2-phenoxyethanol, methylparaben, and ethylparaben) appears in 1,059 products. The second analysis demonstrates that the informatics approach can scale to several thousand target chemicals (11,964 environmental chemicals compiled from five authoritative lists). It repeats the proof-of-concept using a larger product sample (55,209 consumer products). In the third analysis, product usage patterns and usage modes are incorporated. This analysis yields unbiased, rational prioritizations of potentially hazardous chemicals and chemical combinations based on their prevalence within a subset of the product sample (29,814 personal care products), combined exposure from multiple products based on actual consumer behavior, and likely chemical retention based on product usage modes. High-ranking chemicals, and combinations thereof, include glycerol; octamethyltrisiloxane; citric acid; titanium dioxide; 1,2-propanediol; octadecan-1-ol; saccharin; hexitol; limonene; linalool; vitamin E; and 2-phenoxyethanol. The fourth analysis is the same as the third except that each authoritative list is prioritized individually for side-by-side comparison. The informatics approach is a viable and rational way to prioritize chemicals and chemical combinations for risk assessment based on near-field exposure and retention. Compared to spectrographic approaches to chemical detection, the informatics approach has the advantage of a larger product sample, so it often detects chemicals that are missed during spectrographic analysis. However, the informatics approach is limited to the chemicals that are actually listed on product labels. Manufacturers are not required to specify the chemicals in fragrance or flavor mixtures, so the presence of some chemicals may be underestimated. Likewise, chemicals that are not part of the product formulation (e.g., chemicals leached from packaging, degradation byproducts) cannot be detected. Therefore, spectrographic and informatics approaches are complementary.
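
    A minimal sketch of the co-occurrence tabulation step described above, assuming ingredient names have already been normalized to canonical identifiers (e.g., via PubChem) to resolve synonymy; the products and target list are toy stand-ins.

```python
# Hedged sketch: per-product prevalence of target chemicals and their
# pairwise combinations. Real input would be the scraped product database
# with ingredient strings mapped to canonical chemical identifiers.
from collections import Counter
from itertools import combinations

targets = {"methylparaben", "ethylparaben", "2-phenoxyethanol"}

products = {
    "hand lotion": {"water", "glycerol", "methylparaben", "2-phenoxyethanol"},
    "shampoo": {"water", "methylparaben", "ethylparaben"},
    "soap": {"water", "glycerol"},
}

chem_counts = Counter()   # how many products contain each target chemical
combo_counts = Counter()  # how many products contain each chemical pair
for name, ingredients in products.items():
    hits = sorted(targets & ingredients)
    chem_counts.update(hits)
    combo_counts.update(combinations(hits, 2))

print(chem_counts.most_common())
print(combo_counts.most_common())
```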

    Computing point-of-view: modeling and simulating judgments of taste

    Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006. Includes bibliographical references (p. 153-163). People have rich points-of-view that afford them the ability to judge the aesthetics of people, things, and everyday happenstance; yet viewpoint has an ineffable quality that is hard to articulate in words, let alone capture in computer models. Inspired by cultural theories of taste and identity, this thesis explores end-to-end computational modeling of people's tastes, from model acquisition, to generalization, to application, under various realms. Five aesthetical realms are considered: cultural taste, attitudes, ways of perceiving, taste for food, and sense-of-humor. A person's model is acquired by reading her personal texts, such as a weblog diary, a social network profile, or emails. To generalize a person model, methods such as spreading activation, analogy, and imprimer supplementation are applied to semantic resources and search spaces mined from cultural corpora. Once a generalized model is achieved, a person's tastes are brought to life through perspective-based applications, which afford the exploration of someone else's perspective through interactivity and play. The thesis describes model acquisition systems implemented for each of the five aesthetical realms. The techniques of 'reading for affective themes' (RATE) and 'culture mining' are described, along with their enabling technologies, which are commonsense reasoning and textual affect analysis. Finally, six perspective-based applications were implemented to illuminate a range of real-world beneficiaries of person modeling: virtual mentoring, self-reflection, and deep customization. By Xinyu Hugo Liu. Ph.D.
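
    Of the generalization methods named above, spreading activation is the most self-contained; the toy sketch below shows the basic mechanism over a small hand-built semantic network. The graph, decay factor, and hop count are illustrative, not the thesis's actual resources.

```python
# Hedged sketch: spreading activation over a semantic network, one of the
# generalization methods the thesis names. Real resources (e.g.,
# commonsense networks mined from cultural corpora) are far larger.
def spread(graph, seeds, decay=0.5, hops=2):
    """graph: node -> list of neighbors; seeds: node -> initial activation."""
    activation = dict(seeds)
    frontier = dict(seeds)
    for _ in range(hops):
        nxt = {}
        # Each active node pushes decayed energy to its neighbors.
        for node, energy in frontier.items():
            for nbr in graph.get(node, []):
                nxt[nbr] = nxt.get(nbr, 0.0) + energy * decay
        for node, energy in nxt.items():
            activation[node] = activation.get(node, 0.0) + energy
        frontier = nxt
    return activation

graph = {
    "jazz": ["saxophone", "improvisation"],
    "saxophone": ["brass", "music"],
    "improvisation": ["creativity"],
}
print(spread(graph, {"jazz": 1.0}))
```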

    Understanding consumers’ emotions and sensory experience for beauty care products

    Doctor of Philosophy. Department of Food, Nutrition, Dietetics and Health. Martin Talavera. Understanding consumer experience related to the hedonic, sensory, and emotional aspects of products is key to driving consumer-centric product design for the beauty care category. This dissertation comprises three independent studies that explore the consumer experience of beauty care products from two perspectives, liking and beyond liking (emotions), based on conventional sensory and consumer data and on online product reviews. The objective of Chapter 2 was to develop an emotion lexicon that could be used to profile consumers’ emotional responses to beauty care products in sensory and consumer tests. The lexicon was developed in four main steps: sourcing terms from online product reviews, term identification and categorization, term refinement, and term validation. The final emotion lexicon consists of 37 positive emotions and 2 negative emotions. Recommendations on applying this lexicon to each of the three categories of beauty care (skincare, hair care, and makeup) are provided. The validated emotion lexicon from this study is readily applicable to other emotion research for skincare, hair care, and makeup. Chapter 3 explored sensory drivers of liking and emotional associations for beauty care products. Hand creams were used as test samples to be evaluated for sensory characteristics and consumer perception. First, the sensory space (aroma, appearance, texture & skinfeel) of twelve hand creams was profiled by a highly trained descriptive panel using a modified flavor/texture profile approach. Then, seven hand creams selected from the descriptive sensory space were rated for overall liking, emotions (using the lexicon developed in Chapter 2), and consumer characterization using check-all-that-apply (CATA) in a home use test (HUT) with a hundred female consumers from the Kansas City area. Cluster analysis and external preference mapping identified three consumer clusters with different liking patterns: thick & waxy-texture likers, mild-scent & low-to-medium-thickness likers, and strong-scent likers. Consumers with different liking patterns differed in their emotional associations with the sensory characteristics of hand creams; however, high intensities of certain aroma attributes seemed to elicit high-arousal emotions for all groups. The findings of this study could guide the development of new hand cream products targeting different consumer segments. Chapter 4 explored the consumer experience of hand cream products through the “voice of consumers”: online product reviews. A total of 17,581 reviews representing 46 hand creams of different brands, price points, and sensory attributes were collected from Amazon and Ulta Beauty using scraping software. Topic modeling using latent Dirichlet allocation (LDA) identified five major topics consumers mentioned in these online reviews: greasiness & residue of the product, scent/fragrance of the product, skin feel & efficacy of the product, consumers’ skin issues, and occasions when the product is applied. Term frequency–inverse document frequency (tf-idf) calculated for each rating group suggested that unpleasant scent and overall dissatisfaction with quality were the main reasons consumers gave a rating lower than 4 stars, while high efficacy and desirable skinfeel were the drivers of 5-star ratings. These findings highlight the importance of sensory experience and perceived efficacy in consumers’ whole product experience.
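
    A minimal sketch of the Chapter 4 style tf-idf comparison, treating each rating group as one document so that group-distinctive terms surface; the reviews here are invented stand-ins for the scraped Amazon/Ulta corpus.

```python
# Hedged sketch: surfacing terms that distinguish low- from high-rated
# reviews with tf-idf. Each rating group is pooled into one document.
from sklearn.feature_extraction.text import TfidfVectorizer

groups = {
    "low (1-3 stars)": "greasy residue unpleasant scent sticky disappointed",
    "high (4-5 stars)": "absorbs quickly soft skin lovely scent effective",
}

vec = TfidfVectorizer()
X = vec.fit_transform(groups.values())
terms = vec.get_feature_names_out()

# Print the three highest-weighted terms for each rating group.
for label, row in zip(groups, X.toarray()):
    top = sorted(zip(row, terms), reverse=True)[:3]
    print(label, [t for _, t in top])
```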

    Multimedia Retrieval


    Topic Modeling for Natural Language Understanding

    This thesis presents new topic modeling methods to reveal underlying language structures. Topic models have seen many successes in the field of natural language understanding. Despite these successes, deeper exploration of topic modeling for language processing and understanding requires the study of language itself, and much remains to be explored. This thesis combines the study of topic modeling with the exploration of language. Two types of language are explored: normal document texts and spoken-language texts. Normal document texts include all written texts, such as news articles or research papers. Spoken-language text refers to human speech directed at machines, such as smartphones, to obtain a specific service. The main contributions of this thesis fall into two parts. The first is the extraction of word/topic relation structure through the modeling of word pairs. Although word/topic and relation structure has long been recognized as key to language representation and understanding, few researchers have explored the actual relations between words/topics simultaneously with statistical modeling. This thesis introduces a pairwise topic model to examine the relation structure of texts. The pairwise topic model is applied to different document texts, such as news articles, research papers, and medical records, to capture word/topic transitions and topic evolution. The second contribution is topic modeling for spoken language. Spoken language understanding involves processing the spoken language and figuring out how it maps to the actions the user intends. This thesis explores the semantic and syntactic structure of spoken language in detail and provides insight into language structure. A new topic modeling method is also proposed to incorporate these linguistic features. The model can be extended to incorporate prior knowledge, resulting in better interpretation and understanding of spoken language. Ph.D., Information Studies -- Drexel University, 201
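
    The pairwise topic model is the thesis's own contribution and is not reproduced here; the sketch below shows only the vanilla LDA baseline such work builds on, using scikit-learn with a toy corpus and topic count.

```python
# Hedged sketch: a standard LDA baseline of the kind the thesis extends.
# Corpus, topic count, and random seed are toy values.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "stock market trading prices rise",
    "team wins league match goal",
    "market prices fall on trading news",
    "coach praises team after match",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Show the four highest-weighted terms per topic.
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-4:][::-1]
    print(f"topic {k}:", [terms[i] for i in top])
```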

    Semantic multimedia analysis using knowledge and context

    PhD. The difficulty of semantic multimedia analysis can be attributed to the extended diversity in form and appearance exhibited by the majority of semantic concepts and to the difficulty of expressing them using a finite number of patterns. In meeting this challenge there has been a scientific debate on whether the problem should be addressed from the perspective of using overwhelming amounts of training data to capture all possible instantiations of a concept, or from the perspective of using explicit knowledge about the concepts’ relations to infer their presence. In this thesis we address three problems of pattern recognition and propose solutions that combine the knowledge extracted implicitly from training data with the knowledge provided explicitly in structured form. First, we propose a Bayesian network (BN) modeling approach that defines a conceptual space where both domain-related evidence and evidence derived from content analysis can be jointly considered to support or disprove a hypothesis. The use of this space leads to significant gains in performance compared to analysis methods that cannot handle combined knowledge. Then, we present an unsupervised method that exploits the collective nature of social media to automatically obtain large amounts of annotated image regions. By proving that the quality of the obtained samples can be almost as good as that of manually annotated images when working with large datasets, we contribute significantly towards scalable object detection. Finally, we introduce a method that treats images, visual features, and tags as the three observable variables of an aspect model and extracts a set of latent topics that incorporates the semantics of both the visual and the tag information space. By showing that the cross-modal dependencies of tagged images can be exploited to increase the semantic capacity of the resulting space, we advocate the use of all existing information facets in the semantic analysis of social media.
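
    A minimal worked example of the evidence combination the BN approach formalizes: fusing domain evidence with content-analysis evidence for the hypothesis that a concept is present, via the odds form of Bayes' rule under an assumed conditional independence. All numbers are invented.

```python
# Hedged sketch: combining two evidence sources for one hypothesis
# ("concept is present") with Bayes' rule in odds form.
prior = 0.2  # P(concept present), e.g., from domain knowledge

# Likelihood ratios P(observation | present) / P(observation | absent),
# assuming the two evidence sources are conditionally independent.
likelihood_ratios = {
    "visual detector fired": 4.0,
    "context tag matches": 2.5,
}

odds = prior / (1 - prior)
for source, lr in likelihood_ratios.items():
    odds *= lr  # each independent observation multiplies the odds

posterior = odds / (1 + odds)
print(f"P(present | evidence) = {posterior:.2f}")  # 0.71 for these values
```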

    Semantic approaches to domain template construction and opinion mining from natural language

    Most of the text mining algorithms in use today are based on a lexical representation of input texts, for example bag-of-words. A possible alternative is to first convert text into a semantic representation, one that captures the text's content in a structured way using only a set of pre-agreed labels. This thesis explores the feasibility of such an approach to two tasks on collections of documents: identifying common structure in input documents ("domain template construction"), and helping users find differing opinions in input documents ("opinion mining"). We first discuss ways of converting natural language text to a semantic representation. We propose and compare two new methods with varying degrees of target representation complexity. The first method, which shows more promise, is based on dependency parser output, which it converts to lightweight semantic frames with role fillers aligned to WordNet. The second method structures text using Semantic Role Labeling techniques and aligns the output to the Cyc ontology. Based on the first of the above representations, we next propose and evaluate two methods for constructing frame-based templates for documents from a given domain (e.g., bombing attack news reports). A template is the set of all salient attributes (e.g., attacker, number of casualties, etc.). The idea of both methods is to construct abstract frames for which more specific instances (according to the WordNet hierarchy) can be found in the input documents. Fragments of these abstract frames represent the sought-for attributes. We achieve state-of-the-art performance and additionally provide detailed type constraints for the attributes, something not possible with competing methods. Finally, we propose a software system for exposing differing opinions in the news. For any given event, we present the user with all known articles on the topic and let them navigate the articles by three semantic properties simultaneously: sentiment, topical focus, and geography of origin. The result is a dynamically reranked set of relevant articles and a near-real-time focused summary of those articles. The summary, too, is computed from the semantic text representation discussed above. We conducted a user study of the whole system with very positive results.
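
    A minimal sketch of the WordNet abstraction step behind the template construction described above: role fillers are generalized a few hypernym links up the hierarchy, so that distinct specific fillers can meet in a shared abstract frame. The walk depth and example words are illustrative; NLTK's WordNet data must be downloaded first (nltk.download("wordnet")).

```python
# Hedged sketch: abstracting role fillers up the WordNet hypernym
# hierarchy, the kind of generalization used to find abstract frames
# whose specific instances occur in the input documents.
from nltk.corpus import wordnet as wn

def generalize(word, levels=2):
    """Walk a few hypernym links up from the word's first noun sense."""
    synsets = wn.synsets(word, pos=wn.NOUN)
    if not synsets:
        return None
    s = synsets[0]
    for _ in range(levels):
        hypers = s.hypernyms()
        if not hypers:
            break
        s = hypers[0]
    return s.name()

# Distinct fillers may generalize to a shared abstract concept.
for filler in ["bomber", "soldier", "truck"]:
    print(filler, "->", generalize(filler))
```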