
    A holistic approach for semantic-based game generation

    The Web contains vast sources of content that could be reused to reduce the time and effort needed to create games. However, most Web content is unstructured and lacks the meaning machines need to process it and infer new knowledge. The Web of Data is a term used to describe a trend of publishing and interlinking previously disconnected datasets on the Web in order to make them more valuable and useful as a whole. In this paper, we describe an innovative approach that exploits Semantic Web technologies to automatically generate games by reusing Web content. Existing work on automatic game content generation through algorithmic means focuses primarily on a set of parameters within constrained game design spaces, such as terrains or game levels, but does not harness the potential of content already available on the Web. We instead propose a holistic and more generally applicable game generation solution that identifies suitable Web information sources and enriches game content with semantic meta-structures. The research work disclosed in this publication is partially funded by the REACH HIGH Scholars Programme — Post-Doctoral Grants. The grant is part-financed by the European Union, Operational Programme II — Cohesion Policy 2014-2020, Investing in human capital to create more opportunities and promote the wellbeing of society — European Social Fund. Peer-reviewed.
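    The enrichment step the abstract describes — grouping interlinked Web facts into semantic structures a game engine can consume — can be sketched minimally. The triples and the location mapping below are invented placeholders for illustration, not the authors' actual pipeline:

    ```python
    # Toy sketch: turning DBpedia-style (subject, predicate, object) triples
    # into simple game entities. Triples and field names are hypothetical.

    TRIPLES = [
        ("Valletta", "type", "City"),
        ("Valletta", "country", "Malta"),
        ("Valletta", "population", "5730"),
        ("Mdina", "type", "City"),
        ("Mdina", "country", "Malta"),
    ]

    def build_entities(triples):
        """Group triples by subject into attribute dictionaries,
        i.e. a minimal semantic meta-structure for game content."""
        entities = {}
        for subject, predicate, obj in triples:
            entities.setdefault(subject, {})[predicate] = obj
        return entities

    def as_game_location(name, attrs):
        """Map a semantic entity onto a simple game-location record."""
        return {
            "name": name,
            "kind": attrs.get("type", "Unknown"),
            "region": attrs.get("country", "Unknown"),
        }

    entities = build_entities(TRIPLES)
    locations = [as_game_location(n, a) for n, a in entities.items()]
    ```

    In a real system the triples would come from live Semantic Web sources rather than a hard-coded list, and the mapping would be driven by the ontology of the target game genre.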

    Structuring visual exploratory analysis of skill demand

    The analysis of increasingly large and diverse data for meaningful interpretation and question answering is handicapped by human cognitive limitations. Consequently, semi-automatic abstraction of complex data within structured information spaces becomes increasingly important if its knowledge content is to support intuitive, exploratory discovery. Exploration of skill demand is an area where regularly updated, multi-dimensional data may be exploited to assess the workforce’s capability to manage the demands of the modern, technology- and data-driven economy. The knowledge derived may be employed by skilled practitioners in defining career pathways, to identify where, when and how to update their skillsets in line with advancing technology and changing work demands. The same knowledge may also be used to identify the combination of skills essential when recruiting for new roles. To address the challenges inherent in exploring the complex, heterogeneous, dynamic data that feeds into such applications, we investigate the use of an ontology to guide the structuring of the information space, allowing individuals and institutions to interactively explore and interpret the dynamic skill demand landscape for their specific needs. As a test case we consider the relatively new and highly dynamic field of Data Science, where insightful, exploratory data analysis and knowledge discovery are critical. We employ context-driven and task-centred scenarios to explore our research questions and to guide iterative design, development and formative evaluation of our ontology-driven, visual exploratory discovery and analysis approach, measuring where it adds value to users’ analytical activity. Our findings reinforce the potential of our approach and point us to future paths to build on.
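    The core idea of ontology-guided exploration — letting a concept hierarchy aggregate raw demand signals at whatever level of abstraction the user is browsing — can be illustrated with a toy skill ontology. The hierarchy and the demand counts below are invented for the example, not data from the study:

    ```python
    # Hypothetical mini-ontology: parent skill -> narrower child skills.
    ONTOLOGY = {
        "Data Science": ["Machine Learning", "Data Engineering"],
        "Machine Learning": ["Deep Learning", "Model Evaluation"],
        "Data Engineering": ["ETL", "Databases"],
    }

    # Hypothetical job-posting demand counts for leaf skills.
    DEMAND = {"Deep Learning": 120, "Model Evaluation": 80, "ETL": 60, "Databases": 90}

    def rolled_up_demand(skill):
        """Sum demand over a skill and all of its ontology descendants,
        so the landscape can be explored at any level of abstraction."""
        total = DEMAND.get(skill, 0)
        for child in ONTOLOGY.get(skill, []):
            total += rolled_up_demand(child)
        return total
    ```

    A visual front end would then render these rolled-up totals per node, letting the user drill down from broad fields to individual skills.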

    DATUS: Dashboard Assessment Usability Model: A case study with student dashboards

    The software market sees new companies and products appear every day. This growth translates into competition, and the survival of companies comes down to investment in their products. Universities are likewise interested in improving their product, education. This improvement can be achieved by investing in the learning experience of students. Usability and user experience play an important role and have proven to be a competitive advantage worth investing in. Consequently, new methods have emerged to improve the process of evaluating the usability of products. Despite this growth, there is no dedicated model for assessing the usability of a dashboard. This gap motivated the research in this dissertation: a proposal for a new model, the Dashboard Assessment Usability Model (DATUS), accompanied by an evaluation method, which can be applied to evaluating the usability of dashboards. DATUS comprises eight usability dimensions, each corresponding to a specific usability facet identified in an existing standard or model, decomposed into a total of 20 metrics. To verify whether the proposed model is feasible, and as a contribution to Iscte - Instituto Universitário de Lisboa, a prototype dashboard was designed for the Fénix platform, to which the DATUS model was applied. To test the usability of the dashboards, a behavioural study was conducted with 30 Iscte students. After analysing the results, not only was the feasibility of the proposed model and method confirmed, but positive conclusions were also reached regarding the usability of the prototype.
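    A model of this shape — metrics grouped into dimensions, aggregated into an overall score — can be sketched in a few lines. The dimension and metric names below are placeholders, not DATUS's actual eight dimensions and 20 metrics, which are defined in the dissertation:

    ```python
    # Hedged sketch of aggregating dashboard-usability metrics into
    # dimension scores and an overall score. All names and values are
    # hypothetical study results on a 1-5 scale.
    from statistics import mean

    METRICS = {
        "task_completion_rate": 4.2,
        "time_on_task": 3.8,
        "error_count": 4.0,
        "subjective_satisfaction": 4.5,
    }

    # dimension -> metrics that feed it (placeholder grouping)
    DIMENSIONS = {
        "effectiveness": ["task_completion_rate", "error_count"],
        "efficiency": ["time_on_task"],
        "satisfaction": ["subjective_satisfaction"],
    }

    def dimension_scores(metrics, dimensions):
        """Average each dimension's metrics into one score per dimension."""
        return {dim: mean(metrics[m] for m in ms) for dim, ms in dimensions.items()}

    def overall_score(metrics, dimensions):
        """Average the dimension scores into a single usability figure."""
        return mean(dimension_scores(metrics, dimensions).values())
    ```

    An evaluation method built on such a model would additionally prescribe how each metric is measured in a behavioural study before the aggregation step runs.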

    Exploratory sequential data analysis of user interaction in contemporary BIM applications

    Creation-oriented software allows users to work according to their own vision and rules. From the perspective of software analysis this is challenging, because there is no certainty about how users are using the software and what kinds of workflows emerge among different users. The aim of this thesis was to study and identify the potential of sequential event-pattern extraction and analysis in expert creation-oriented software in the field of Building Information Modeling (BIM). The thesis additionally introduces a concept evaluation model for detecting repetition-based usability disruption. Finally, the work presents an implementation of user-behaviour analysis based on sequential pattern mining, together with a predictive machine-learning application using state-of-the-art algorithms. The data analysis implementation builds on the theory of Sequential and Exploratory Sequential Data Analysis (SDA and ESDA) in usability studies. The study implements an application-specific workflow-sequence detection and database transfer approach, and uses two modern mining algorithms, BIDE and TKS, for sequential pattern discovery. Finally, the thesis uses the resulting sequence database to predict user detailing workflows with the CPT+ algorithm. The main contribution of the thesis is to open scalable options for both software usability work and product development to automatically recognize and predict usability- and workflow-related information, deficiencies and repetitive workflows. In doing so, more quantifiable metrics can be revealed about user behaviour in the software's interface.
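    The core intuition behind this kind of mining — count event patterns across user sessions and keep the ones that recur often enough — can be shown with a toy miner. This is only an illustration of the idea; BIDE and TKS are far more sophisticated (closed patterns, top-k search, gap handling), and the sessions below are invented:

    ```python
    # Toy frequent-pattern miner over user event sessions. Counts each
    # contiguous pattern at most once per session (its "support") and
    # keeps patterns meeting a minimum support threshold.
    from collections import Counter

    sessions = [
        ["open", "select", "copy", "paste", "save"],
        ["open", "select", "copy", "paste", "undo", "save"],
        ["open", "zoom", "select", "copy", "paste"],
    ]

    def frequent_patterns(sessions, length, min_support):
        counts = Counter()
        for s in sessions:
            seen = set()
            for i in range(len(s) - length + 1):
                seen.add(tuple(s[i:i + length]))
            counts.update(seen)  # one count per session, not per occurrence
        return {p, c for p, c in counts.items()} if False else \
               {p: c for p, c in counts.items() if c >= min_support}
    ```

    A repetition-based disruption detector would then flag patterns whose support (or per-session repetition) exceeds a threshold, and a predictor such as CPT+ would use the mined sequence database to guess a user's next action.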

    A Geographical Approach for Integrating Belief Networks and Geographic Information Sciences to Probabilistically Predict River Depth

    Geography is, traditionally, a discipline dedicated to answering complex spatial questions. Although spatial statistical techniques, such as weighted regressions and weighted overlay analyses, are commonplace within the geographical sciences, probabilistic reasoning and uncertainty analyses are not typical. For example, belief networks are statistically robust and computationally powerful, but are not strongly integrated into geographic information systems. This is one of the reasons that belief networks have not been more widely utilized within the environmental sciences community. Geography’s traditional method of delivering information through maps provides a mechanism for conveying probabilities and uncertainties to decision makers in a clear, concise manner. This study couples probabilistic methods with Geographic Information Sciences (GISc), resulting in a practical decision system framework. While the methods for building the decision system in this study focus on the identification of environmental navigation hazards, the framework concept is not bound to this study and can be applied to other complex environmental questions.
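    At its smallest, the probabilistic reasoning a belief network performs per map cell is a Bayesian update. The prior, likelihoods and depth classes below are invented numbers for illustration; a real network would encode many interacting variables (flow, substrate, season) rather than a single observation:

    ```python
    # Minimal Bayes-rule sketch of probabilistic river-depth classification.
    # All probabilities are hypothetical.

    PRIOR = {"shallow": 0.3, "navigable": 0.7}

    # P(observation = "dark water" | depth class), hypothetical likelihoods.
    LIKELIHOOD = {"shallow": 0.2, "navigable": 0.8}

    def posterior(prior, likelihood):
        """P(depth class | observation): multiply prior by likelihood,
        then normalize so the posterior sums to 1."""
        unnorm = {d: prior[d] * likelihood[d] for d in prior}
        z = sum(unnorm.values())
        return {d: v / z for d, v in unnorm.items()}
    ```

    Rendering each cell's posterior (and its uncertainty) as a map layer is precisely the GISc delivery mechanism the study argues for.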

    Software defect prediction using maximal information coefficient and fast correlation-based filter feature selection

    Software quality ensures that developed applications are failure-free. Some modern systems are intricate due to the complexity of their information processes. Software fault prediction is an important quality assurance activity, since a mechanism that correctly predicts the defect proneness of modules and classifies them accordingly saves resources, time and developers’ effort. In this study, a model that selects relevant features for use in defect prediction was proposed. The literature review revealed that process metrics, which are based on historic source code over time, are better predictors of defects in version systems. These metrics are extracted from the source-code module and include, for example, the number of additions and deletions to the source code, the number of distinct committers and the number of modified lines. In this research, defect prediction was conducted on open source software (OSS) software product lines (SPL), hence process metrics were chosen. Data sets used in defect prediction may contain non-significant and redundant attributes that affect the accuracy of machine-learning algorithms. To improve the prediction accuracy of classification models, only features that are significant to the defect prediction process are utilised. In machine learning, feature selection techniques are applied to identify the relevant data; feature selection is a pre-processing step that helps reduce the dimensionality of the data. Feature selection techniques include information-theoretic methods based on the entropy concept. This study experimented with the efficiency of these feature selection techniques, and it was realised that software defect prediction using significant attributes improves prediction accuracy.
    A novel MICFastCR model was developed, which uses the Maximal Information Coefficient (MIC) to select significant attributes and the Fast Correlation-Based Filter (FCBF) to eliminate redundant attributes. Machine learning algorithms were then run to predict software defects. The MICFastCR model achieved the highest prediction accuracy as reported by various performance measures. School of Computing, Ph. D. (Computer Science).
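    The entropy-based machinery behind FCBF-style filtering can be sketched directly: rank features by symmetric uncertainty with the class label, then drop a feature that is at least as correlated with an already-selected feature as with the class. The tiny dataset and threshold logic are invented for illustration, and this sketch does not reproduce the thesis's MICFastCR model, which additionally uses MIC:

    ```python
    # Symmetric uncertainty SU(X, Y) = 2 * IG(X; Y) / (H(X) + H(Y)),
    # the correlation measure FCBF ranks and prunes features with.
    from collections import Counter
    from math import log2

    def entropy(xs):
        n = len(xs)
        return -sum(c / n * log2(c / n) for c in Counter(xs).values())

    def symmetric_uncertainty(xs, ys):
        hx, hy = entropy(xs), entropy(ys)
        hxy = entropy(list(zip(xs, ys)))      # joint entropy H(X, Y)
        ig = hx + hy - hxy                    # information gain IG(X; Y)
        return 2 * ig / (hx + hy) if hx + hy else 0.0

    # Hypothetical process-metric dataset: two features and a defect label.
    churn = [1, 1, 0, 0, 1, 0]
    adds  = [1, 1, 0, 0, 1, 0]   # deliberately a redundant copy of churn
    label = [1, 1, 0, 0, 1, 0]

    su_churn = symmetric_uncertainty(churn, label)
    # FCBF-style redundancy check: keep `adds` only if it tells the label
    # more than the already-selected `churn` tells about `adds`.
    keep_adds = symmetric_uncertainty(adds, label) > symmetric_uncertainty(adds, churn)
    ```

    Here `adds` duplicates `churn`, so the redundancy check correctly discards it even though it is perfectly correlated with the label.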

    A Usability Approach to Improving the User Experience in Web Directories

    Submitted for the degree of Doctor of Philosophy, Queen Mary, University of London.