
    Tailored information dashboards: A systematic mapping of the literature

    Information dashboards are extremely useful tools to exploit knowledge. Dashboards enable users to reach insights and to identify patterns within data at a glance. However, dashboards present a series of characteristics and configurations that may not be optimal for every user, thus requiring the modification or variation of their features to fulfill specific user requirements. This variation process is usually referred to as customization, personalization, or adaptation, depending on how it is achieved. Given the great number of users and the exponential growth of data sources, tailoring an information dashboard is not a trivial task, as several solutions and configurations could arise. To analyze and understand the current state of the art regarding tailored information dashboards, a systematic mapping has been performed. This mapping focuses on answering questions regarding how existing dashboard solutions in the literature manage the customization, personalization, and/or adaptation of their elements to produce tailored displays.
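
    The sketch below is not from the paper; it is a minimal, hypothetical illustration of the distinction the abstract draws between user-driven customization and system-driven adaptation of a dashboard configuration. All names (DashboardConfig, customize, adapt) are invented for illustration.

```python
# Minimal, hypothetical sketch (not from the paper) of how a dashboard
# configuration might be tailored: "customization" applies explicit user
# choices, while "adaptation" derives changes from observed usage.
from dataclasses import dataclass, field

@dataclass
class DashboardConfig:
    widgets: list = field(default_factory=lambda: ["kpi", "table", "map"])
    theme: str = "light"

def customize(config: DashboardConfig, preferences: dict) -> DashboardConfig:
    """User-driven variation: apply preferences the user stated explicitly."""
    config.theme = preferences.get("theme", config.theme)
    config.widgets = preferences.get("widgets", config.widgets)
    return config

def adapt(config: DashboardConfig, usage_counts: dict) -> DashboardConfig:
    """System-driven variation: reorder widgets by how often they are used."""
    config.widgets.sort(key=lambda w: usage_counts.get(w, 0), reverse=True)
    return config

cfg = customize(DashboardConfig(), {"theme": "dark"})
cfg = adapt(cfg, {"map": 12, "kpi": 40, "table": 3})
print(cfg)  # DashboardConfig(widgets=['kpi', 'map', 'table'], theme='dark')
```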

    Science of Digital Libraries(SciDL)

    Our purpose is to ensure that people and institutions better manage information through digital libraries (DLs). Thus we address a fundamental human and social need, which is particularly urgent in the modern Information (and Knowledge) Age. Our goal is to significantly advance both the theory and state-of-the-art of DLs (and other advanced information systems), thoroughly validating our approach using highly visible testbeds. Our research objective is to leverage our formal, theory-based approach to the problems of defining, understanding, modeling, building, personalizing, and evaluating DLs. We will construct models and tools based on that theory so organizations and individuals can easily create and maintain fully functional DLs, whose components can interoperate with corresponding components of related DLs. This research should be highly meritorious intellectually. We bring together a team of senior researchers with expertise in information retrieval, human-computer interaction, scenario-based design, personalization, and componentized system development, and expect to make important contributions in each of those areas. Of crucial import, however, is that we will integrate our prior research and experience to achieve breakthrough advances in the field of DLs, regarding theory, methodology, systems, and evaluation. We will extend the 5S theory, which has identified five key dimensions or constructs underlying effective DLs: Streams, Structures, Spaces, Scenarios, and Societies. We will use that theory to describe and develop metamodels, models, and systems, which can be tailored to disciplines and/or groups, as well as personalized. We will disseminate our findings as well as provide toolkits as open source software, encouraging wide use. We will validate our work using testbeds, ensuring broad impact. We will put powerful tools into the hands of digital librarians so they may easily plan and configure tailored systems, to support an extensible set of services, including publishing, discovery, searching, browsing, recommending, and access control, handling diverse types of collections and varied genres and classes of digital objects. With these tools, end-users will be able to design personal DLs. Testbeds are crucial to validate scientific theories and will be thoroughly integrated into SciDL research and evaluation. We will focus on two application domains, which together should allow comprehensive validation and increase the significance of SciDL's impact on scholarly communities. One is education (through CITIDEL); the other is libraries (through DLA and OCKHAM). CITIDEL deals with content from publishers (e.g., ACM Digital Library), corporate research efforts (e.g., CiteSeer), volunteer initiatives (e.g., DBLP, based on the database and logic programming literature), CS departments (e.g., NCSTRL, mostly technical reports), educational initiatives (e.g., Computer Science Teaching Center), and universities (e.g., theses and dissertations). DLA is a unit of the Virginia Tech library that virtually publishes scholarly communication such as faculty-edited journals and rare and unique resources, including image collections and finding aids from Special Collections. The OCKHAM initiative, calling for simplicity in the library world, emphasizes a three-part solution: lightweight protocols, component-based development, and open reference models. It provides a framework to research the deployment of the SciDL approach in libraries. Thus our choice of testbeds also will ensure that our research will have additional benefit to and impact on the fields of computing and library and information science, supporting transformations in how we learn and deal with information.
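
    As a purely illustrative aid (not the project's actual metamodel), the sketch below expresses the 5S constructs named in the abstract as a minimal data model; every field name and example value is an assumption made for illustration.

```python
# Illustrative sketch only: a minimal data model for the 5S constructs
# (Streams, Structures, Spaces, Scenarios, Societies). Field names and
# example values are hypothetical, not the project's actual metamodel.
from dataclasses import dataclass, field

@dataclass
class DigitalLibraryModel:
    streams: list = field(default_factory=list)     # content flows, e.g. text or video
    structures: list = field(default_factory=list)  # organizational schemes, e.g. metadata schemas
    spaces: list = field(default_factory=list)      # e.g. vector spaces used for retrieval
    scenarios: list = field(default_factory=list)   # services described as sequences of events
    societies: list = field(default_factory=list)   # user groups and their roles

etd_library = DigitalLibraryModel(
    streams=["thesis_pdf"],
    structures=["descriptive metadata schema"],
    spaces=["tf-idf vector space"],
    scenarios=["search", "browse", "recommend"],
    societies=["students", "librarians"],
)
print(etd_library.scenarios)
```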

    Customizing Experiences for Mobile Virtual Reality

    Manually creating content for a game is a time-consuming and labor-intensive process that requires a diverse set of skills (typically designers, artists, and programmers) and the management of different resources (specialized hardware and software). Since budget, time, and resources are often very limited, projects could benefit from a solution that allows savings to be invested in other aspects of development. In the context of this thesis, we address this challenge by proposing the creation of specific packages for the generation of customizable content, focused on mobile Virtual Reality (VR) applications. This approach splits the problem into a two-sided solution: first, Procedural Content Generation, achieved through conventional methods and through the novel use of Large Language Models; second, Content Co-Creation, which emphasizes the collaborative development of content. Additionally, since this work focuses on compatibility with mobile VR, the hardware limitations associated with standalone VR headsets and ways to overcome them are also addressed. Content is generated using current methods in procedural generation while facilitating content co-creation by the user. Combining both approaches yields more replayable environments, objectives, and overall content with far less manual design. This approach is currently being applied in the development of two distinct VR applications. The first, AViR, is intended to offer psychological support to individuals after pregnancy loss. The second, EmotionalVRSystem, aims to measure variations in participants' emotional responses induced by changes in the environment, using EEG technology for precise readings.
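
    The following is a hypothetical sketch, not the thesis's actual toolkit: it shows the general idea of seed-based procedural placement combined with a user-supplied "co-creation" parameter set, which is one way the replayability described above could be obtained. Function and parameter names are invented.

```python
# Hypothetical sketch (not the thesis's actual packages): seed-based procedural
# placement of scene objects, constrained by a user-supplied "co-creation"
# parameter set that limits what the generator is allowed to place.
import random

def generate_environment(seed: int, user_constraints: dict) -> list:
    rng = random.Random(seed)                       # same seed -> same layout
    allowed = user_constraints.get("allowed_objects", ["tree", "rock", "bench"])
    count = user_constraints.get("object_count", 20)
    area = user_constraints.get("area_size", 50.0)  # square side length in meters
    scene = []
    for _ in range(count):
        scene.append({
            "type": rng.choice(allowed),
            "x": rng.uniform(0.0, area),
            "z": rng.uniform(0.0, area),
            "rotation": rng.uniform(0.0, 360.0),
        })
    return scene

# A different seed gives a different, but equally valid, layout (replayability).
layout_a = generate_environment(seed=1, user_constraints={"allowed_objects": ["tree", "lamp"]})
layout_b = generate_environment(seed=2, user_constraints={"allowed_objects": ["tree", "lamp"]})
print(len(layout_a), layout_a[0])
```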

    Automatic generation of software interfaces for supporting decision-making processes. An application of domain engineering & machine learning

    [EN] Data analysis is a key process to foster knowledge generation in particular domains or fields of study. With a strong informative foundation derived from the analysis of collected data, decision-makers can make strategic choices with the aim of obtaining valuable benefits in their specific areas of action. However, given the steady growth of data volumes, data analysis needs to rely on powerful tools to enable knowledge extraction. Information dashboards offer a software solution to analyze large volumes of data visually, to identify patterns and relations, and to make decisions according to the presented information. But decision-makers may have different goals and, consequently, different necessities regarding their dashboards. Moreover, the variety of data sources, structures, and domains can hamper the design and implementation of these tools. This Ph.D. Thesis tackles the challenge of improving the development process of information dashboards and data visualizations while enhancing their quality and features in terms of personalization, usability, and flexibility, among others. Several research activities have been carried out to support this thesis. First, a systematic literature mapping and review was performed to analyze different methodologies and solutions related to the automatic generation of tailored information dashboards. The outcomes of the review led to the selection of a model-driven approach in combination with the software product line paradigm to deal with the automatic generation of information dashboards. In this context, a meta-model was developed following a domain engineering approach. This meta-model represents the skeleton of information dashboards and data visualizations through the abstraction of their components and features and has been the backbone of the subsequent generative pipeline of these tools. The meta-model and generative pipeline have been tested through their integration in different scenarios, both theoretical and practical. Regarding the theoretical dimension of the research, the meta-model has been successfully integrated with another meta-model to support knowledge generation in learning ecosystems, and as a framework to conceptualize and instantiate information dashboards in different domains. In terms of the practical applications, the focus has been put on how to transform the meta-model into an instance adapted to a specific context, and how to finally transform this latter model into code, i.e., the final, functional product. These practical scenarios involved the automatic generation of dashboards in the context of a Ph.D. Programme, the application of Artificial Intelligence algorithms in the process, and the development of a graphical instantiation platform that combines the meta-model and the generative pipeline into a visual generation system. Finally, different case studies have been conducted in the employment and employability, health, and education domains. The number of applications of the meta-model in theoretical and practical dimensions and domains is also a result in itself. Every outcome associated with this thesis is driven by the dashboard meta-model, which also proves its versatility and flexibility when it comes to conceptualizing, generating, and capturing knowledge related to dashboards and data visualizations.
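
    To make the model-to-code idea concrete, the sketch below shows a tiny "model instance" for a dashboard being transformed into a code artifact (a stub HTML page). This is a minimal illustration under assumed names and fields (Visualization, DashboardModel, generate_html); it does not reproduce the thesis's meta-model or pipeline.

```python
# Minimal, hypothetical sketch of the model-to-code idea: a small dashboard
# model (standing in for a meta-model instance) is transformed into a code
# artifact. The fields below are illustrative, not the thesis's meta-model.
from dataclasses import dataclass
from typing import List

@dataclass
class Visualization:
    title: str
    chart_type: str   # e.g. "bar", "line"
    data_source: str  # e.g. a CSV path or an API endpoint

@dataclass
class DashboardModel:
    name: str
    visualizations: List[Visualization]

def generate_html(model: DashboardModel) -> str:
    """Transform the model into a (very simplified) HTML artifact."""
    panels = "\n".join(
        f'  <div class="panel" data-chart="{v.chart_type}" data-src="{v.data_source}">'
        f"{v.title}</div>"
        for v in model.visualizations
    )
    return f"<html><body>\n<h1>{model.name}</h1>\n{panels}\n</body></html>"

model = DashboardModel(
    name="PhD Programme Dashboard",
    visualizations=[Visualization("Enrolments per year", "bar", "enrolments.csv")],
)
print(generate_html(model))
```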

    Security Aspects of Printed Electronics Applications

    Printed electronics (PE) is an emerging technology used as a complement to conventional electronics. Its unique characteristics have led to a sharp rise in market share, amounting to $6 billion in 2010, $41 billion in 2019, and an estimated $153 billion in 2027. Printed electronics combines additive technologies with functional inks to manufacture electronic components from various materials directly at the point of use, cost-efficiently and in an environmentally friendly way. The substrates used can be flexible, lightweight, transparent, large-area, or implantable. Printed electronics thus makes (still) visionary applications such as smart packaging, disposable electronics, smart labels, or digital skin feasible. To drive progress in printed electronics technologies, most optimizations have focused mainly on increasing production yield, reliability, and performance. In recent years, however, the security aspects of hardware platforms have increasingly come to the fore. Since applications realized in printed electronics can provide vital functionality involving sensitive user data, for example in implanted devices and smart patches for health monitoring, security deficiencies and a lack of product trust in the manufacturing chain lead to serious and sometimes severe problems. Furthermore, because of the characteristic features of printed electronics, such as additive manufacturing processes, large feature sizes, few layers, and a limited number of production steps, printed hardware is inherently vulnerable to hardware-based attacks such as reverse engineering, counterfeiting, and hardware Trojans. Moreover, adopting countermeasures from conventional technologies is unsuitable and inefficient, as they would introduce extreme overhead into the low-cost fabrication of printed electronics. For this reason, this work provides a technology-specific assessment of threats at the hardware level and of countermeasures in the form of resource-constrained hardware primitives to protect the production chain and the functionality of printed electronics applications. The first contribution of this dissertation is a proposed approach for designing printed Physical Unclonable Functions (pPUFs), which provide security keys to enable several security countermeasures such as authentication and fingerprinting. In addition, we optimize the multi-bit pPUF design to reduce the area requirement of a 16-bit key generator by 31%. We also develop an analysis framework for pPUFs based on Monte Carlo simulations, with which we can perform simulation- and fabrication-based analyses. Our results show that the pPUFs possess the properties necessary for successful use in security applications, such as uniqueness of the signature and sufficient robustness. The fabricated pPUFs operated down to supply voltages as low as 0.5 V. In the second contribution of this work, we present a compact design of a printed true random number generator (pTRNG) that can generate unpredictable keys for cryptographic functions and random authentication challenges. The pTRNG design compensates for process variations using a tuning method for printed resistors, enabled by the individual configurability of printed circuits, so that the generated bits depend only on random noise and thus exhibit true random behavior. The simulation results indicate that the overall process variation of the TRNG is improved 110-fold, and the random bitstream generated by the TRNGs passes the National Institute of Standards and Technology statistical test suite. Here too, our characterization results of the fabricated TRNGs show that their operating voltage can be reduced from several volts to only 0.5 V. The third contribution of this dissertation is the description of the unique circuit design and fabrication features of printed electronics, which differ greatly from conventional technologies and therefore require a novel reverse engineering (RE) methodology. To this end, we present a robust RE methodology based on supervised learning algorithms for printed circuits in order to demonstrate their vulnerability to RE attacks. The RE results show that the proposed methodology can be applied to numerous printed circuits without much complexity or expensive tools. The last contribution of this work is a proposed concept for a one-time programmable printed look-up table (pLUT) that can realize arbitrary digital functions and supports countermeasures such as camouflaging, split manufacturing, and watermarking to prevent hardware-level attacks. A comparison of the proposed pLUT concept with existing solutions shows that it requires less area and has lower worst-case delay and power consumption. To verify the configurability of the proposed pLUT, it was simulated, fabricated, and programmed with inkjet-printed electrically conductive ink, successfully realizing logic gates such as XNOR, XOR, and AND. The simulation and characterization results demonstrate the successful functionality of the pLUT at operating voltages as low as 1 V.
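
    The toy sketch below is not the dissertation's analysis framework; it only illustrates, under a deliberately simplified device model, how Monte Carlo simulation can quantify the two PUF properties mentioned above: uniqueness (inter-device Hamming distance, ideally near 50%) and robustness (intra-device distance under noise, ideally near 0%). All parameters are assumptions.

```python
# Toy Monte Carlo sketch (not the dissertation's framework): model each PUF
# cell as a pair of random process-variation offsets, derive a 16-bit key by
# comparing the pair, and evaluate uniqueness (inter-device Hamming distance,
# ideally ~50%) and robustness (intra-device distance under noise, ideally ~0%).
import random

BITS = 16

def make_device(rng):
    # One offset pair per cell; the sign of the difference fixes one key bit.
    return [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(BITS)]

def read_key(device, rng, noise=0.05):
    return [int(a + rng.gauss(0, noise) > b + rng.gauss(0, noise)) for a, b in device]

def hamming_pct(k1, k2):
    return 100.0 * sum(b1 != b2 for b1, b2 in zip(k1, k2)) / BITS

rng = random.Random(0)
devices = [make_device(rng) for _ in range(100)]
keys = [read_key(d, rng) for d in devices]

inter = [hamming_pct(keys[i], keys[j])
         for i in range(len(keys)) for j in range(i + 1, len(keys))]
intra = [hamming_pct(keys[i], read_key(devices[i], rng)) for i in range(len(devices))]

print(f"mean inter-device distance: {sum(inter)/len(inter):.1f}% (ideal 50%)")
print(f"mean intra-device distance: {sum(intra)/len(intra):.1f}% (ideal 0%)")
```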

    Techno-Economic modelling of hybrid renewable mini-grids for rural electrification planning in Sub-Saharan Africa

    Access to clean, modern energy services is a necessity for sustainable development. The UN Sustainable Development Goals and SE4ALL program commit to the provision of universal access to modern energy services by 2030. However, the latest available figures estimate that 1.1 billion people are living without access to electricity, with over 55% living in Sub-Saharan Africa. Furthermore, 85% live in rural areas, often with challenging terrain, low income and population density, or in countries with severe underinvestment in electricity infrastructure, making grid extension unrealistic. Recently, improvements in technology, cost efficiency, and new business models have made mini-grids, which combine multiple energy technologies in hybrid systems, one of the most promising alternatives for electrification off the grid. The International Energy Agency has estimated that up to 350,000 new mini-grids will be required to reach universal access goals by 2030. Given the intermittent and location-dependent nature of renewable energy sources, the evolving costs and performance characteristics of individual technologies, and the characteristics of interacting technologies, detailed system simulation and demand modelling is required to determine the cost-optimal combination of technologies for each and every potential mini-grid site. Adding to this are the practical details on the ground, such as community electricity demand profiles and distances to the grid or fuel sources, as well as the social and political contexts, such as unknown energy demand uptake or technology acceptance, national electricity system expansion plans, and subsidies or taxes, among others. These can all have significant impacts on deciding the applicability of a mini-grid within that context. The scope of the research and modelling framework presented focuses primarily on meeting the specific energy needs of the sub-Saharan African context. Thus, by being transparent, using freely available software and data, and aiming to be reproducible, scalable, and customizable, the model aims to be fully flexible, remaining relevant to other contexts and useful for answering future research questions. The techno-economic model implementation presented in this paper simulates hourly mini-grid operation using meteorological data, demand profiles, technology capabilities, and costing data to determine the optimal component sizing of hybrid mini-grids appropriate for rural electrification. The results demonstrate how system sizing depends on location, renewable resources, and technology cost and performance. The model is applied to the investigation of 15 hypothetical mini-grid sites in different regions of South Africa to validate and demonstrate its capabilities. The effects of technology hybridization and future technology cost reductions on the expected cost of energy and the optimal technology configurations are demonstrated. The modelling results also showed that the combination of hydrogen fuel cells and electrolysers was not an economical energy storage option with present-day technology costs and performance. Thereafter, the model was used to determine an approximate fuel cell and electrolyser cost target curve up to the year 2030. Ultimately, research efforts applying the model and building on the presented framework are intended to bridge the science-policy boundary, provide credible insight for energy and electrification policies, and identify high-impact focus areas for further research.
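
    As a simplified sketch of the hourly-simulation idea described above (and not the model in the paper), the code below dispatches PV first, then battery, then a diesel generator against a repeating daily demand profile, and derives a crude cost-of-energy figure for one candidate sizing. Every capacity, profile, and cost figure is an illustrative assumption.

```python
# Simplified sketch of the hourly-dispatch idea (all capacities, costs and the
# dispatch rule are illustrative assumptions, not the model described above):
# serve demand from PV first, then battery, then a diesel generator, and
# derive a crude cost-of-energy figure for one candidate system sizing.
def simulate_year(pv_kw, battery_kwh, demand_kw, pv_profile):
    soc = battery_kwh                 # battery state of charge (kWh), start full
    diesel_kwh = 0.0
    for hour in range(8760):
        pv_out = pv_kw * pv_profile[hour % 24]      # daily PV shape, repeated
        net = demand_kw[hour % 24] - pv_out
        if net <= 0:
            soc = min(battery_kwh, soc - net)       # charge battery with surplus
        else:
            discharge = min(net, soc)
            soc -= discharge
            diesel_kwh += net - discharge           # diesel covers the remainder
    return diesel_kwh

# Illustrative inputs: a simple daily demand shape (kW) and a clear-sky PV shape.
demand = [2, 2, 2, 2, 3, 4, 6, 5, 4, 4, 4, 4, 4, 4, 4, 5, 6, 8, 9, 8, 6, 4, 3, 2]
pv_shape = [0, 0, 0, 0, 0, .1, .3, .5, .7, .9, 1, 1, 1, .9, .7, .5, .3, .1, 0, 0, 0, 0, 0, 0]

diesel_kwh = simulate_year(pv_kw=10, battery_kwh=20, demand_kw=demand, pv_profile=pv_shape)
annual_demand = sum(demand) * 365
annual_cost = 10 * 150 + 20 * 80 + diesel_kwh * 0.45   # assumed annualized $/kW, $/kWh, fuel $/kWh
print(f"diesel share: {100 * diesel_kwh / annual_demand:.1f}%, "
      f"cost of energy: {annual_cost / annual_demand:.2f} $/kWh")
```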

    Modern Trends in the Automatic Generation of Content for Video Games

    Attractive and realistic content has always played a crucial role in the penetration and popularity of digital games, virtual environments, and other multimedia applications. Procedural content generation enables the automation of production of any type of game content, including not only landscapes and narratives but also game mechanics and the generation of whole games. The article offers a comparative analysis of the approaches to automatic generation of content for video games proposed in the last five years. It suggests a new typology of the use of procedurally generated game content, comprising categories structured into three groups: content nature, generation process, and game dependence. Together with two other taxonomies, one of content types and the other of content generation methods, this typology is used for comparing and discussing specific approaches to procedural content generation in three promising research directions based on applying personalization and adaptation, descriptive languages, and semantic specifications.
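
    Purely as a hypothetical illustration of the "personalization and adaptation" direction mentioned above (not an approach from the surveyed literature), the sketch below derives level-generation parameters from a measure of recent player performance; the tuning rule and parameter names are assumptions.

```python
# Hypothetical sketch of PCG with adaptation: level parameters are derived
# from a measure of recent player performance, so generated content tracks
# the player's skill. The tuning rule and parameters are illustrative only.
import random

def generate_level(player_win_rate: float, seed: int) -> dict:
    rng = random.Random(seed)
    difficulty = min(1.0, max(0.0, player_win_rate))   # map skill to [0, 1]
    return {
        "enemy_count": int(3 + 7 * difficulty),         # 3..10 enemies
        "enemy_speed": 1.0 + difficulty,                # 1.0..2.0
        "health_pickups": int(5 - 3 * difficulty),      # fewer pickups when skilled
        "layout_seed": rng.randrange(1_000_000),        # handle for procedural layout
    }

print(generate_level(player_win_rate=0.2, seed=42))  # easier level
print(generate_level(player_win_rate=0.9, seed=42))  # harder level
```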

    UML class diagrams supporting formalism definition in the Draw-Net Modeling System

    The Draw-Net Modeling System (DMS) is a customizable framework supporting the design and solution of models expressed in any graph-based formalism, thanks to an open architecture. Over the years, many formalisms (Petri Nets, Bayesian Networks, Fault Trees, etc.) have been included in DMS. A formalism defines all the primitives that can be used in a model (nodes, arcs, properties, etc.) and is stored in XML files. The paper describes a new way to manage formalisms: the user can create a new formalism by drawing a UML Class Diagram (CD); the corresponding XML files are then automatically generated. If instead the user intends to edit an existing formalism, a "reverse engineering" function generates the CD from the XML files. The CD can be handled inside DMS and acts as an intuitive, graphical "meta-model" representing the formalism. An application example is presented.
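
    To illustrate the general idea of a formalism stored as XML (nodes, arcs, and their properties), here is a small sketch that parses such a description back into an in-memory structure. The element and attribute names are invented for illustration; they are not DMS's actual file format.

```python
# Illustrative sketch only: a tiny graph formalism (nodes, arcs, properties)
# stored as XML and parsed back into an in-memory description. Element and
# attribute names are invented; they are not DMS's actual file format.
import xml.etree.ElementTree as ET

FORMALISM_XML = """
<formalism name="PetriNet">
  <node type="Place"><property name="tokens" kind="int"/></node>
  <node type="Transition"><property name="rate" kind="float"/></node>
  <arc type="InputArc" from="Place" to="Transition">
    <property name="weight" kind="int"/>
  </arc>
</formalism>
"""

def load_formalism(xml_text: str) -> dict:
    root = ET.fromstring(xml_text)
    primitives = {"name": root.get("name"), "nodes": {}, "arcs": {}}
    for node in root.findall("node"):
        primitives["nodes"][node.get("type")] = [
            (p.get("name"), p.get("kind")) for p in node.findall("property")
        ]
    for arc in root.findall("arc"):
        primitives["arcs"][arc.get("type")] = {
            "from": arc.get("from"),
            "to": arc.get("to"),
            "properties": [(p.get("name"), p.get("kind")) for p in arc.findall("property")],
        }
    return primitives

print(load_formalism(FORMALISM_XML)["nodes"]["Place"])  # [('tokens', 'int')]
```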

    Intentional Technology For Teaching Practice

    In today's era, where educational technology is in a near-constant state of evolution, the imperative is not just to adopt technology, but to do so with a defined purpose and strategy. As educators within military education, there is a growing need to discern which technological tools and practices align best with our mission and the goals we set for our students. Teaching is more than just transferring knowledge; it is about fostering environments conducive to growth, critical thinking, and lifelong learning. This e-book contains collective insights, experiences, and reflections from faculty participating in a Faculty Learning Community (FLC), a yearlong, structured community of practice engaged in the thoughtful exploration of educational technology topics during the 2022-2023 academic year at the Air Force Institute of Technology. Whether by leveraging social annotation tools to engage students in reading, formulating effective methods to produce and utilize educational content, innovating with game-based learning, or seamlessly integrating multiple applications for meaningful classroom experiences, our aim is to provide you with insights and actionable guidance for use within your own classrooms.