46 research outputs found

    The UCF Report, Vol. 08 No. 13, October 16, 1985

    DOD contracts the prize: Governor's help in Washington escalates UCF's research hopes; Holsenbeck named to UCF post; Provost Ellis tells plans to teach before retiring; UCF Theatre to open with British comedy; Faculty Senate to debate student complaint policy

    H.M. Briggs Library Serials List: 24th Edition


    Cultural impacts on web: An empirical comparison of interactivity in websites of South Korea and the United Kingdom

    This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University. The thesis explores cultural differences in the interactive design features used on websites in South Korea and the United Kingdom, from the perspectives of both professional website designers and end-users. It also investigates how the use of interactive design features in different cultures changes over time. Four interaction types on websites are identified: User to Interface (U2I), User to Content (U2C), User to Provider (U2P), and User to User (U2U) interactivity; and three interaction types on blogs: Blogger to Interface (B2I), Blogger to Content (B2C), and Blogger to Blogger (B2B) interactivity. Four cultural dimensions form the theoretical basis of the study, from which four hypotheses were proposed in relation to the interaction types above: (a) High versus Low Context cultures for U2I, (b) High versus Low Uncertainty Avoidance for U2C, (c) High versus Low Power Distance for U2P, and (d) Individualism versus Collectivism for U2U interactivity, in order to discover the effects of national cultures on interactivity in websites. We derived our own interactivity dimensions and mapped them to the four interaction types for websites and the three for blogs; interactive design features were then derived from these interactivity dimensions and examined in our studies. The findings reveal some movement towards homogeneity in the use of interactive design features on charity websites in South Korea and the United Kingdom, although evidence of some cultural differences remains. From the end-users' perspective, the results show that the use of interactive design features on blogs may be influenced by culture, but only within a certain context. The findings also indicate that users interacting within the same blog service can be regarded as sharing concerns rather than a national location, thus creating a particular type of community in which bloggers are subject to social influence and adopt a shared set of values, preferences, and styles that amounts almost to a common social culture. As a result, the cultural differences derived from their country of origin have little impact.
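    The dimension-to-interaction-type mapping described above is essentially a small lookup structure. As a minimal sketch (the feature names below are illustrative assumptions, not taken from the thesis), it could be encoded like this:

        # Hypothetical encoding of the four hypotheses: each cultural
        # dimension is mapped to the website interaction type it is
        # predicted to affect (U2I, U2C, U2P, U2U).
        HYPOTHESES = {
            "high_vs_low_context": "U2I",
            "uncertainty_avoidance": "U2C",
            "power_distance": "U2P",
            "individualism_vs_collectivism": "U2U",
        }

        # Illustrative feature inventory: each interactive design feature
        # is tagged with the interaction type it realizes.
        FEATURES = {
            "site_map": "U2I",
            "search_box": "U2C",
            "contact_form": "U2P",
            "comment_thread": "U2U",
        }

        def features_for_dimension(dimension: str) -> list[str]:
            """Return the design features whose interaction type the given
            cultural dimension is hypothesized to influence."""
            itype = HYPOTHESES[dimension]
            return [f for f, t in FEATURES.items() if t == itype]

        print(features_for_dimension("power_distance"))  # ['contact_form']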

    A Digital Game Maturity Model

    Game development is an interdisciplinary endeavour that embraces artistic, software engineering, management, and business disciplines, and it is considered one of the most complex tasks in software engineering. Hence, to develop good-quality games successfully, game developers must consider and explore all related dimensions as well as discuss them with the stakeholders involved. This research facilitates a better understanding of the important dimensions of digital game development methodology. The increased popularity of digital games, the challenges game development organizations face in developing quality games, and severe competition in the digital game industry all demand an assessment of game development process maturity. Consequently, this study presents a Digital Game Maturity Model for evaluating the current development methodology in an organization. The objective is first to identify key factors in the game development process, then to classify these factors into target groups, and finally to use this grouping as the theoretical basis for a maturity model for digital game development. In doing so, the research focuses on three major stakeholders in game development: developers, consumers, and business management. The framework of the proposed model consists of assessment questionnaires built from the key factors identified in three empirical studies, a performance scale, and a rating method. The main goal of the questionnaires is to collect information about current processes and practices. This research contributes towards a comprehensive and unified strategy for assessing game development process maturity. The proposed model was evaluated in two case studies from the digital game industry.
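    The assessment machinery (questionnaires, a performance scale, and a rating method) can be pictured with a short sketch. The factor names, the five-point performance scale, and the averaging rule below are illustrative assumptions, not the model's published instrument:

        # Minimal sketch of a maturity assessment: questionnaire answers on
        # a 1-5 performance scale are grouped per stakeholder, averaged into
        # a per-group score, and then into an overall rating.
        from statistics import mean

        # Hypothetical key factors grouped by stakeholder (target groups).
        QUESTIONNAIRE = {
            "developers": ["team_skills", "tooling", "testing_practices"],
            "consumers": ["gameplay_quality", "support"],
            "business": ["budget_control", "marketing_alignment"],
        }

        def assess(answers: dict[str, int]) -> dict[str, float]:
            """answers maps each factor to a 1-5 performance score; returns
            the average per target group plus an overall rating."""
            scores = {
                group: mean(answers[f] for f in factors)
                for group, factors in QUESTIONNAIRE.items()
            }
            scores["overall"] = mean(scores.values())
            return scores

        example = {"team_skills": 4, "tooling": 3, "testing_practices": 2,
                   "gameplay_quality": 5, "support": 4,
                   "budget_control": 3, "marketing_alignment": 2}
        print(assess(example))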

    Mobilizing the Past for a Digital Future: The Potential of Digital Archaeology

    Mobilizing the Past is a collection of 20 articles that explore the use and impact of mobile digital technology in archaeological field practice. The detailed case studies presented in this volume range from drones in the Andes to iPads at Pompeii and digital workflows in the American Southwest, with examples of how bespoke, DIY, and commercial software both provide solutions and create novel challenges for field archaeologists. The range of projects and contexts ensures that Mobilizing the Past for a Digital Future is far more than a state-of-the-field manual or technical handbook. Instead, the contributors embrace the growing spirit of critique present in digital archaeology. This critical edge, backed by real projects, systems, and experiences, gives the book lasting value as a glimpse into present practices as well as the anxieties and enthusiasm associated with the most recent generation of mobile digital tools. The book emerged from a workshop funded by the National Endowment for the Humanities, held in 2015 at Wentworth Institute of Technology in Boston, which brought together over 20 leading practitioners of digital archaeology in the U.S. for a weekend of conversation. The papers in this volume reflect the discussions at that workshop, with significant additional content. Starting with an expansive introduction and concluding with a series of reflective papers, the volume illustrates how tablets, connectivity, sophisticated software, and powerful computers have transformed field practices and offer the potential for a radically transformed discipline.

    Toponym Resolution in Text

    Institute for Communicating and Collaborative Systems.

    Background. In the area of Geographic Information Systems (GIS), a discipline shared between informatics and geography, the term geo-parsing describes the process of identifying place names in text, which in computational linguistics is known as named entity recognition and classification (NERC). The term geo-coding is used for the task of mapping from implicitly geo-referenced datasets (such as structured address records) to explicitly geo-referenced representations (e.g., using latitude and longitude). However, present-day GIS systems provide no automatic geo-coding functionality for unstructured text. In Information Extraction (IE), processing of named entities in text has traditionally been seen as a two-step process comprising a flat text-span recognition sub-task and an atomic classification sub-task; relating the text span to a model of the world has been ignored by evaluations such as MUC or ACE (Chinchor (1998); U.S. NIST (2003)). However, spatial and temporal expressions refer to events in space-time, and the grounding of events is a precondition for accurate reasoning. Thus, automatic grounding can improve many applications, such as automatic map drawing (e.g., for choosing a focus) and question answering (e.g., for questions like How far is London from Edinburgh?, given a story in which both occur and can be resolved). Whereas temporal grounding has received considerable attention in the recent past (Mani and Wilson (2000); Setzer (2001)), robust spatial grounding has long been neglected. Concentrating on geographic names for populated places, I define the task of automatic Toponym Resolution (TR) as computing the mapping from occurrences of names for places as found in a text to a representation of the extensional semantics of the location referred to (its referent), such as a geographic latitude/longitude footprint. The task of mapping from names to locations is hard due to insufficient and noisy databases and a large degree of ambiguity: common words need to be distinguished from proper names (geo/non-geo ambiguity), and the mapping between names and locations is itself ambiguous (London can refer to the capital of the UK, to London, Ontario, Canada, or to about forty other Londons on earth). In addition, names of places and the boundaries referred to change over time, and databases are incomplete.

    Objective. I investigate how referentially ambiguous spatial named entities can be grounded, or resolved, with respect to an extensional coordinate model, robustly and on open-domain news text. I begin by comparing the few algorithms proposed in the literature and, by comparing semi-formal, reconstructed descriptions of them, factor out a shared repertoire of linguistic heuristics (e.g., rules, patterns) and extra-linguistic knowledge sources (e.g., population sizes). I then investigate how to combine these sources of evidence to obtain a superior method. I also investigate the noise introduced by the named entity tagging step on which toponym resolution relies in a sequential system pipeline architecture.

    Scope. In this thesis, I investigate a present-day snapshot of terrestrial geography as represented in the gazetteer defined here and, accordingly, a collection of present-day news text. I limit the investigation to populated places; geo-coding of artifact names (e.g., airports or bridges) and compositional geographic descriptions (e.g., 40 miles SW of London, near Berlin) is not attempted. Historic change is a major factor affecting gazetteer construction and ultimately toponym resolution, but it is beyond the scope of this thesis.

    Method. While a small number of previous attempts have been made to solve the toponym resolution problem, these were either not evaluated, or evaluation was done by manual inspection of system output rather than by curating a reusable reference corpus. Since the relevant literature is scattered across several disciplines (GIS, digital libraries, information retrieval, natural language processing) and descriptions of algorithms are mostly given in informal prose, I attempt to describe them systematically and aim at a reconstruction in a uniform, semi-formal pseudo-code notation for easier re-implementation. A systematic comparison leads to an inventory of heuristics and other sources of evidence. A comparative evaluation procedure requires an evaluation resource; unfortunately, to date no gold standard has been curated in the research community. To this end, a reference gazetteer and an associated novel reference corpus with human-labelled referent annotation are created. These are subsequently used to benchmark a selection of the reconstructed algorithms and a novel re-combination of the heuristics catalogued in the inventory. I then compare the performance of the same TR algorithms under three different conditions, namely applying them to (i) the output of human named entity annotation, (ii) automatic annotation using an existing Maximum Entropy sequence tagging model, and (iii) a naïve toponym lookup procedure in a gazetteer.

    Evaluation. The algorithms implemented in this thesis are evaluated in an intrinsic, or component, evaluation. To this end, we define a task-specific matching criterion to be used with the traditional Precision (P) and Recall (R) evaluation metrics. This matching criterion is lenient with respect to numerical gazetteer imprecision in situations where one toponym instance is marked up with different gazetteer entries in the gold standard and the test set, respectively, but where these refer to the same candidate referent, a situation caused by multiple near-duplicate entries in the reference gazetteer.

    Main Contributions. The major contributions of this thesis are as follows:
    • A new reference corpus in which instances of location named entities have been manually annotated with spatial grounding information for populated places, together with an associated reference gazetteer from which the assigned candidate referents are chosen. This reference gazetteer provides numerical latitude/longitude coordinates (such as 51°32′ North, 0°5′ West) as well as hierarchical path descriptions (such as London > UK) with respect to a worldwide-coverage geographic taxonomy constructed by combining several large but noisy gazetteers. The corpus contains news stories and comprises two sub-corpora: a subset of the REUTERS RCV1 news corpus used for the CoNLL shared task (Tjong Kim Sang and De Meulder (2003)) and a subset of the Fourth Message Understanding Contest (MUC-4; Chinchor (1995)), both available pre-annotated with gold-standard named entity annotation. This corpus will be made available as a reference evaluation resource;
    • a new method and implemented system for resolving toponyms that is capable of robustly processing unseen text (open-domain online newswire text) and grounding toponym instances in an extensional model using longitude and latitude coordinates and hierarchical path descriptions, drawing on internal (textual) and external (gazetteer) evidence;
    • an empirical analysis of the relative utility of various heuristic biases and other sources of evidence for the toponym resolution task when analysing free news-genre text;
    • a comparison between a replicated method from the literature, which functions as a baseline, and a novel algorithm based on minimality heuristics; and
    • several exemplary prototype applications showing how the resulting toponym resolution methods can be used to create visual surrogates for news stories, a geographic exploration tool for news browsing, geographically aware document retrieval, and answers to spatial questions (How far...?) in an open-domain question answering system. These applications are demonstrative in character only, as a thorough quantitative, task-based (extrinsic) evaluation of the utility of automatic toponym resolution is beyond the scope of this thesis and is left for future work.
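    To make the heuristic combination concrete, here is a minimal sketch of one evidence source from such an inventory: a population-size bias that, given the candidate referents for an ambiguous toponym, picks the most populous one. The tiny gazetteer and the selection rule are illustrative assumptions, not the thesis's actual gazetteer or algorithm:

        # Minimal sketch of a single disambiguation heuristic: among all
        # gazetteer candidates for a toponym, prefer the most populous
        # referent. The entries below are illustrative, not real data.
        GAZETTEER = {
            "London": [
                {"lat": 51.51, "lon": -0.13,
                 "path": "London > UK", "pop": 8_900_000},
                {"lat": 42.98, "lon": -81.25,
                 "path": "London > Ontario > Canada", "pop": 400_000},
            ],
        }

        def resolve(toponym: str) -> dict | None:
            """Return the candidate referent with the largest population,
            or None if the name is not in the gazetteer."""
            candidates = GAZETTEER.get(toponym)
            if not candidates:
                return None
            return max(candidates, key=lambda c: c["pop"])

        print(resolve("London")["path"])  # London > UK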
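    The lenient matching criterion can be sketched in the same spirit: two referents count as a match if they are near-duplicate gazetteer entries within a small distance tolerance. The haversine formula is standard, but the 5 km threshold and the pairwise alignment are assumptions for illustration; the thesis defines its own task-specific criterion:

        # Sketch of lenient Precision/Recall: near-duplicate gazetteer
        # entries referring to the same place count as correct.
        from math import radians, sin, cos, asin, sqrt

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance between two lat/lon points in km."""
            dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
            a = (sin(dlat / 2) ** 2
                 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
            return 2 * 6371 * asin(sqrt(a))

        def lenient_match(gold, predicted, tol_km=5.0):
            """Treat entries within tol_km as the same candidate referent."""
            return haversine_km(gold["lat"], gold["lon"],
                                predicted["lat"], predicted["lon"]) <= tol_km

        def precision_recall(gold_refs, pred_refs, tol_km=5.0):
            """Lenient P/R over aligned toponym instances; None marks an
            instance the system left unresolved."""
            hits = sum(1 for g, p in zip(gold_refs, pred_refs)
                       if p is not None and lenient_match(g, p, tol_km))
            attempted = sum(1 for p in pred_refs if p is not None)
            precision = hits / attempted if attempted else 0.0
            recall = hits / len(gold_refs) if gold_refs else 0.0
            return precision, recall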

    Contribution à la définition de modèles de recherche d'information flexibles basés sur les CP-Nets (Contribution to the definition of flexible information retrieval models based on CP-Nets)

    This thesis addresses two main problems in information retrieval (IR): (1) the automatic formalization of user preferences (automatic query weighting) and (2) semantic indexing of documents. Our overall contribution is the definition of a theoretical flexible IR model based on CP-Nets (Conditional Preference Networks). In our first contribution, we propose a flexible IR approach founded on CP-Nets. The CP-Net formalism is used, on the one hand, for the graphical representation of flexible queries expressing qualitative preferences and, on the other hand, for the flexible evaluation of document relevance. For the user, expressing qualitative preferences is simpler and more intuitive than formulating numerical weights to quantify them; an automated system, however, reasons more easily over ordinal weights. We therefore propose an approach for automatic query weighting by quantifying the corresponding CP-Nets with utility values. This quantification yields a UCP-Net, which corresponds to a weighted Boolean query. CP-Nets are also used to represent documents, with a view to the flexible evaluation of the queries thus weighted. In our second contribution, we propose a conceptual indexing approach based on CP-Nets, using the CP-Net formalism as an indexing language to represent a document's representative concepts and the conditional relations between them in a relatively compact way. The nodes of the CP-Net are the concepts representative of the document's content, and the relations between these nodes express the conditional associations that link them. This contribution has two aspects: first, we propose an approach for extracting concepts by projection onto WordNet, the resulting concepts forming the nodes of the CP-Net; second, we propose to extend and use the technique of semantic association rules to discover the conditional relations between the concept nodes of the CP-Net. Finally, we propose a query evaluation mechanism based on graph matching between the document and query CP-Nets.
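    As a rough illustration of the quantification step only (the term names, utility values, and scoring rule are invented for the example, and the conditional, parent-dependent preferences of a real CP-Net are omitted), a quantified preference structure can be read as a weighted Boolean query:

        # Simplified sketch: qualitative preferences quantified with utility
        # values (in the spirit of a UCP-Net), then evaluated as a weighted
        # Boolean query over a document's terms. All numbers are illustrative.
        UCP_WEIGHTS = {
            "information": 0.9,  # strongly preferred term
            "retrieval": 0.7,
            "flexible": 0.4,     # less preferred term, lower utility
        }

        def score(document_terms: set[str]) -> float:
            """Weighted Boolean evaluation: sum the utilities of the query
            terms the document satisfies, normalized by total utility."""
            total = sum(UCP_WEIGHTS.values())
            got = sum(u for term, u in UCP_WEIGHTS.items()
                      if term in document_terms)
            return got / total

        print(score({"information", "retrieval", "systems"}))  # 0.8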

    Management of Technological Innovation in Developing and Developed Countries

    Get PDF
    It is widely accepted that technology is one of the forces driving economic growth. Yet although more and more new technologies have emerged, evidence from many quarters shows that their performance has often fallen short of expectations. In both academia and practice there are still many open questions about which technologies to adopt and how to manage them. The 15 articles in this book address these questions, and the book has several distinctive features. First, the articles come from both developed and developing countries in Asia, Africa, and South and Central America. Second, they cover a wide range of industries, including telecommunications, sanitation, healthcare, entertainment, education, manufacturing, and financial services. Third, the analytical approaches are multidisciplinary, ranging from the mathematical and economic to the empirical and strategic. Finally, the articles study both public and private organizations, including the service industry, the manufacturing industry, and governmental organizations. Given its wide coverage and multidisciplinary character, the book should be useful for both academic research and practical management.

    Computer-assisted audit tools and techniques use: Determinants for individual acceptance

    Over the last fifteen years, several studies on individual technology acceptance have been carried out and several new models have been proposed. All these models aim to understand and identify the determinants of technology acceptance and the drivers that lead to successful adoption. This dissertation's main emphasis is on understanding the individual acceptance of Computer-assisted Audit Tools and Techniques (CAATTs) in the context of Portuguese Statutory Auditors. Previous research in other countries, drawing on several distinct research universes, informed this work and the definition of its main objectives, which are: 1) to understand the tasks in which CAATTs are used; 2) to identify the drivers of CAATTs adoption; 3) to explore the current usage of CAATTs among statutory auditors; and 4) to develop a CAATTs adoption model. To reach these objectives, two studies were conducted: a qualitative, exploratory study supported by interviews with experts, and a quantitative study operationalized through a questionnaire administered to 110 Portuguese statutory auditors. The latter study was the cornerstone that allowed the CAATTs acceptance model to be tested. This dissertation presents significant contributions for the various stakeholders: individual Statutory Auditors, statutory audit firms, the Portuguese Institute of Statutory Auditors, software houses, and higher education institutions.
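    Acceptance models of this kind are typically tested by regressing an intention-to-use measure on candidate determinants. As a purely illustrative sketch (the construct names, the Likert coding, and the ordinary-least-squares fit are assumptions, not the dissertation's actual model or statistics):

        # Illustrative sketch of testing an acceptance model: regress
        # intention to use CAATTs on two hypothetical determinants measured
        # on 1-7 Likert scales. Data and construct names are invented.
        import numpy as np

        # Columns: performance_expectancy, effort_expectancy (Likert 1-7).
        X = np.array([[6, 5], [4, 3], [7, 6], [3, 4], [5, 5], [2, 2]],
                     dtype=float)
        y = np.array([6, 4, 7, 3, 5, 2], dtype=float)  # intention to use

        # Add an intercept column and fit by ordinary least squares.
        X1 = np.column_stack([np.ones(len(X)), X])
        coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

        print("intercept, PE weight, EE weight:", np.round(coef, 3))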