262 research outputs found

    Locations of Practice: The Social Production of Locative Media

    Locative media is a descriptive term that designates the artistic deployment of an assemblage of mobile and location-aware technologies in the production of site-specific experiences or installations for public spaces. It has been described as a ‘test-category’ or ‘mobile media movement’ through which a wide gamut of individuals and collectives explore the possibilities of emerging mobile and location-based technologies. Underlying theoretical concerns have focused, for instance, on reconfigurations of understandings and experiences of space; associations with psychogeography; the potential for grassroots activist applications; and the dependency on technological infrastructures associated with power and control. A fundamental tension exists between the tools employed in production, which are commercial technologies, and the rhetoric of locative media practice, which posits these technologies as deployable beyond command-and-control infrastructures. Concealed within this tension is the manner in which locative media production abuts the commercial uptake of mobile and location-based technologies, and the specific practices that support the appropriation of commercial channels for non-commercial ends. This thesis engages with the circumstances that enable (or inhibit) locative media production. Locative media is framed as a consequence of social relations and as a field of cultural production set within contextual and contingent conditions that circumscribe practice. In focusing on the conditions of production, that is, the processes through which locative media experiences are constructed, I provide site-specific interpretations through two case studies. The analysis elucidates what is not readily apparent in a final aesthetic experience and reveals the conditions and constraints of production, including the manner in which certain practices are legitimized, disavowed, and contradicted. The practices that ensue from these particular sites of production are not representative of the entire field of locative media. These engagements articulate specific locations of practice: the physical and symbolic spaces that support the production of locative media, and it is within these spaces of production that practices emerge.

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing, custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as the specific action items to be performed at every step. The discussion also covers language and tool support and the challenges arising from the transformation.
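    The paper itself presents no code, but one building block of such a cross-system solution is the invocation of a WSDL-described service by an orchestrating process. A minimal sketch follows, using the Python zeep SOAP client; the endpoint URL and the SubmitOrder operation are hypothetical stand-ins, not artifacts from the case study.

    ```python
    # pip install zeep
    # Sketch of calling a WSDL-described service, as a BPEL engine would do
    # during process orchestration. Endpoint and operation are hypothetical.
    from zeep import Client

    # zeep parses the WSDL and exposes its operations as a service proxy.
    client = Client("http://erp.example.org/OrderService?wsdl")

    # Invoke the (hypothetical) SubmitOrder operation declared in the WSDL.
    result = client.service.SubmitOrder(orderId="42", quantity=10)
    print(result)
    ```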

    Annual Report of the University, 1999-2000, Volumes 1-4

    The Robert O. Anderson School and Graduate School of Management at The University of New Mexico
    Period of Report: July 1, 1999 to June 30, 2000
    Submitted by Howard L. Smith, Dean
    The Anderson Schools of Management is divided into four distinct divisions: the Department of Accounting; the Department of Finance, International and Technology Management; the Department of Marketing, Information and Decision Sciences; and the Department of Organizational Studies. This structure provides an opportunity for The Anderson Schools to develop four distinct areas of excellence, proven by the results reported here.
    I. Significant Developments During the Academic Year
    The Anderson Schools of Management
    • As a result of the multi-year gift from the Ford Motor Company, completed renovation of The Schools' Advisement and Placement Center, as well as all student organization offices.
    • The Ford gift also provided $100,000 to support faculty research, case studies, and course development.
    • The Schools revised the MBA curriculum to meet the changing needs of professional, advanced business education.
    • The Schools updated computer laboratory facilities with the addition of a 45-unit cluster for teaching and student work.
    • The faculty and staff of The Schools furthered outreach in economic development activities by participating directly as committee members and leaders in the cluster workgroups of the Next Generation Economy Initiative.
    • The faculty, staff, and students of The Schools contributed to the development of the Ethics in Business Awards; particularly exciting was the fact that all nominee packages were developed by student teams from The Anderson Schools.
    • The Schools continue to generate more credit hours per faculty member than any other division of the UNM community.
    The Accounting Department
    • Preparation and presentation of a progress report to the accrediting body, the AACSB.
    The Department of Finance, International and Technology Management
    • The Department continued to focus on expansion of the Management of Technology program as a strategic strength of The Schools.
    The Department of Marketing, Information and Decision Sciences
    • Generated 9022 credit hours, with a student enrollment of 3070.
    The Department of Organizational Studies
    • Coordinated the 9th UNM-Universidad de Guanajuato (UG) Mexico Student Exchange.

    Document analysis by means of data mining techniques

    The huge amount of textual data produced every day by scientists, journalists, and Web users makes it possible to investigate many different aspects of the information stored in published documents. Data mining and information retrieval techniques are exploited to manage and extract information from huge amounts of unstructured textual data. Text mining, also known as text data mining, is the process of extracting high-quality information (in terms of relevance, novelty, and interestingness) from text by identifying patterns. Text mining typically involves structuring the input text by means of parsing and other linguistic analyses, or sometimes by removing extraneous data, and then finding patterns in the structured data. The discovered patterns are finally evaluated and the output interpreted to accomplish the desired task. Text mining has recently attracted attention in several fields, such as security (e.g., the analysis of Internet news), commerce (search and indexing), and academia (e.g., question answering). Beyond retrieving the documents that contain the words of a user query, text mining may provide direct answers to users through semantic-web techniques that consider the content itself, i.e., its meaning and context. It can also support intelligence analysts, and it is used in some e-mail spam filters to screen out unwanted material. Text mining usually includes tasks such as clustering, categorization, sentiment analysis, entity recognition, entity relation modeling, and document summarization.

    In particular, summarization approaches are suitable for identifying the relevant sentences that describe the main concepts presented in a document collection. Furthermore, the knowledge contained in the most informative sentences can be employed to improve the understanding of user and/or community interests. Different approaches have been proposed to extract summaries from unstructured text documents. Some of them are based on the statistical analysis of linguistic features by means of supervised machine learning or data mining methods, such as hidden Markov models, neural networks, and naive Bayes methods. An appealing research field is the extraction of summaries tailored to the major user interests. In this context, extracting useful information according to domain knowledge related to the user interests is a challenging task. The main topics of this thesis have been the study and design of novel data representations and data mining algorithms for managing and extracting knowledge from unstructured documents.

    This thesis describes an effort to investigate the application of data mining approaches that are firmly established on transactional data (e.g., frequent itemset mining) to textual documents. Frequent itemset mining is a widely used exploratory technique to discover hidden correlations that frequently occur in the source data. Although its application to transactional data is well established, the use of frequent itemsets in textual document summarization had never been investigated before. This work exploits frequent itemsets for multi-document summarization and presents a novel multi-document summarizer, ItemSum (Itemset-based Summarizer), built on an itemset-based model, i.e., a framework composed of frequent itemsets extracted from the document collection.
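    As a minimal sketch of the transactional view underlying this approach, each sentence can be treated as a transaction of distinct terms, and frequent itemsets mined by counting co-occurring term sets. The brute-force enumeration and the thresholds below are illustrative only; they are not ItemSum's actual mining algorithm.

    ```python
    from collections import Counter
    from itertools import combinations

    def frequent_itemsets(sentences, min_support=2, max_size=3):
        """Brute-force frequent itemset mining over sentences-as-transactions."""
        # Each sentence becomes a transaction: its set of distinct lowercase terms.
        transactions = [frozenset(s.lower().split()) for s in sentences]
        counts = Counter()
        for terms in transactions:
            for size in range(1, max_size + 1):
                for itemset in combinations(sorted(terms), size):
                    counts[itemset] += 1
        # Keep only itemsets occurring in at least min_support sentences.
        return {i: c for i, c in counts.items() if c >= min_support}

    docs = [
        "the summit produced a climate agreement",
        "forty nations signed the climate agreement",
        "the agreement sets binding emission targets",
    ]
    print(frequent_itemsets(docs))  # e.g. ('agreement',): 3, ('agreement', 'climate'): 2
    ```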
    Highly representative, non-redundant sentences are selected for the summary by jointly considering a sentence relevance score, based on tf-idf statistics, and the coverage of a concise, highly informative itemset-based model. To evaluate ItemSum's performance, a suite of experiments was performed on a collection of news articles. The results show that ItemSum significantly outperforms widely used previous summarizers in terms of precision, recall, and F-measure. We also validated our approach against a large number of competing approaches on the DUC’04 document collection. Performance comparisons, in terms of precision, recall, and F-measure, were performed by means of the ROUGE toolkit. In most cases, ItemSum significantly outperforms the considered competitors. Furthermore, the impact of the main algorithm parameters and of the adopted model coverage strategy on the summarization performance is investigated as well.

    In some cases, the soundness and readability of the generated summaries are unsatisfactory, because the summaries do not effectively cover all the semantically relevant facets of the data. A step towards the generation of more accurate summaries has been made by semantics-based summarizers. Such approaches combine general-purpose summarization strategies with ad hoc linguistic analysis. The key idea is to also consider the semantics behind the document content, to overcome the limitations of general-purpose strategies in differentiating between sentences on the basis of their actual meaning and context. Most previously proposed approaches perform the semantics-based analysis as a preprocessing step that precedes the main summarization process; as a result, the generated summaries may not entirely reflect the actual meaning and context of the key document sentences. In contrast, we aim at tightly integrating ontology-based document analysis into the summarization process, so that the semantic meaning of the document content is taken into account during sentence evaluation and selection. With this in mind, we propose a new multi-document summarizer, the Yago-based Summarizer, which integrates an established ontology-based entity recognition and disambiguation step. Named entity recognition (NER) based on the YAGO ontology is used for the summarization task: the NER task is concerned with marking occurrences of mentioned objects and classifying these mentions into a set of predefined categories, such as “person”, “location”, “geo-political organization”, “facility”, “organization”, and “time”. The use of NER in text summarization improves the process by increasing the rank of informative sentences. To demonstrate the effectiveness of the proposed approach, we compared its performance on the DUC’04 benchmark document collection with that of a large number of state-of-the-art summarizers. Furthermore, we performed a qualitative evaluation of the soundness and readability of the generated summaries and a comparison with the results produced by the most effective summarizers. A parallel effort has been devoted to integrating semantics-based models and the knowledge acquired from social networks into a document summarization model named SociONewSum.
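    A minimal sketch of the two scoring ingredients just described: a tf-idf-based sentence relevance score, boosted by the number of recognized named entities. Here spaCy's stock NER model stands in for the YAGO-based recognition and disambiguation step, and entity_bonus is an illustrative parameter, not a value from the thesis.

    ```python
    # pip install scikit-learn spacy && python -m spacy download en_core_web_sm
    import spacy
    from sklearn.feature_extraction.text import TfidfVectorizer

    nlp = spacy.load("en_core_web_sm")  # stand-in for YAGO-based entity recognition

    def rank_sentences(sentences, entity_bonus=0.1):
        # Base relevance: sum of the tf-idf weights of the terms in each sentence.
        tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
        ranked = []
        for i, sentence in enumerate(sentences):
            relevance = tfidf[i].sum()
            # Boost sentences mentioning named entities (person, location, org, ...).
            relevance += entity_bonus * len(nlp(sentence).ents)
            ranked.append((relevance, sentence))
        return [s for _, s in sorted(ranked, reverse=True)]
    ```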
    This effort addresses the sentence-based generic multi-document summarization problem, which can be formulated as follows: given a collection of news articles about the same topic, the goal is to extract a concise yet informative summary consisting of the most salient document sentences. An established ontological model is used to improve summarization performance by integrating a textual entity recognition and disambiguation step. Furthermore, the analysis of user-generated content coming from Twitter is exploited to discover current social trends and improve the appeal of the generated summaries. An experimental evaluation of SociONewSum was conducted on real English-written news article collections and Twitter posts. The achieved results demonstrate the effectiveness of the proposed summarizer, in terms of different ROUGE scores, compared to state-of-the-art open-source summarizers as well as to a baseline version of SociONewSum that does not perform any UGC analysis. Furthermore, the readability of the generated summaries has also been analyzed.
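    The evaluations above report precision, recall, and F-measure computed with the ROUGE toolkit; the sketch below shows an equivalent measurement using the rouge-score Python package, a reimplementation used here in place of the original toolkit. The reference and generated summaries are made up for the example.

    ```python
    # pip install rouge-score
    from rouge_score import rouge_scorer

    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

    reference = "Forty nations signed a new climate agreement at the summit."
    generated = "The summit produced a new climate agreement among forty nations."

    # score(target, prediction) returns precision/recall/F-measure per metric.
    for metric, s in scorer.score(reference, generated).items():
        print(f"{metric}: P={s.precision:.2f} R={s.recall:.2f} F={s.fmeasure:.2f}")
    ```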

    Undergraduate Bulletin of the University of San Diego 2006-2008

    282 pages : illustrations, photographs ; 28 cm
    https://digital.sandiego.edu/coursecatalogs-undergrad/1019/thumbnail.jp

    Lingnan College Hong Kong : President's report 1989-1990

    https://commons.ln.edu.hk/lingnan_annualreport/1009/thumbnail.jp

    Reports to the President

    A compilation of annual reports for the 1999-2000 academic year, including a report from the President of the Massachusetts Institute of Technology, as well as reports from the academic and administrative units of the Institute. The reports outline the year's goals, accomplishments, honors and awards, and future plans.

    Catalog Denison University 2011-2012

    Denison University Course Catalog 2011-2012
    https://digitalcommons.denison.edu/denisoncatalogs/1108/thumbnail.jp

    Natural Language Processing: Emerging Neural Approaches and Applications

    This Special Issue highlights the most recent research carried out in the NLP field and discusses its open issues, with a particular focus both on emerging approaches for language learning, understanding, production, and grounding, learned interactively or autonomously from data in cognitive and neural systems, and on their potential or actual applications in different domains.

    Catalog Denison University 2010-2011

    Denison University Course Catalog 2010-2011
    https://digitalcommons.denison.edu/denisoncatalogs/1107/thumbnail.jp