11,791 research outputs found

    Digital and Media Literacy: A Plan of Action

    Outlines a community education movement to implement the Knight Commission's 2009 recommendation to enhance digital and media literacy. Suggests local, regional, state, and national initiatives, such as teacher education and parent outreach, and discusses implementation challenges.

    Distributed Hybrid Simulation of the Internet of Things and Smart Territories

    This paper deals with the use of hybrid simulation to build and compose heterogeneous simulation scenarios that can be proficiently exploited to model and represent the Internet of Things (IoT). Hybrid simulation is a methodology that combines multiple modelling/simulation modalities: complex scenarios are decomposed into simpler ones, each simulated through a specific strategy, and all of these simulation building blocks are then synchronized and coordinated. This methodology is well suited to representing IoT setups, which are usually very demanding to model because of the heterogeneity of scenarios arising from the massive deployment of sensors and devices. We present a use case concerned with the distributed simulation of smart territories, a novel view of decentralized geographical spaces that, thanks to the use of IoT, builds ICT services to manage resources in a way that is sustainable and not harmful to the environment. Three different simulation models are combined: an adaptive agent-based parallel and distributed simulator, an OMNeT++ based discrete event simulator, and a script-language simulator based on MATLAB. Results from a performance analysis confirm the viability of using hybrid simulation to model complex IoT scenarios.
    Comment: arXiv admin note: substantial text overlap with arXiv:1605.0487
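
    As an illustration of the synchronization step described above, the sketch below runs a conservative time-synchronization loop over two toy simulation components in Python. It is a hypothetical sketch, not the paper's simulator stack (which combines an agent-based simulator, OMNeT++ and MATLAB); the Federate class and the event payloads are invented for the example.

```python
import heapq

class Federate:
    """One simulation building block with its own local event queue."""
    def __init__(self, name, events):
        self.name = name
        self.queue = list(events)          # (timestamp, payload) pairs
        heapq.heapify(self.queue)

    def next_event_time(self):
        return self.queue[0][0] if self.queue else float("inf")

    def advance_to(self, t):
        """Process every local event up to and including global time t."""
        while self.queue and self.queue[0][0] <= t:
            ts, payload = heapq.heappop(self.queue)
            print(f"[{self.name}] t={ts}: {payload}")

def run_coordinated(federates, horizon):
    """Conservative coordinator: global time only advances to the
    earliest pending event across all federates, so no federate
    ever receives an event from its past."""
    now = 0.0
    while now < horizon:
        now = min(f.next_event_time() for f in federates)
        if now == float("inf"):
            break
        for f in federates:
            f.advance_to(now)

run_coordinated(
    [Federate("agents", [(1.0, "agent moves"), (4.0, "agent senses")]),
     Federate("network", [(2.5, "packet delivered")])],
    horizon=10.0,
)
```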

    The LifeWatch approach to the exploration of distributed species information

    © 2014 Daniel Fuentes, Nicola Fiore. This paper introduces a new method of automatically extracting, integrating and presenting information regarding species from the most relevant online taxonomic resources. First, the information is extracted and joined using data wrappers and integration solutions. Then, an analytical tool is used to provide a visual representation of the data. The information is then integrated into a user-friendly content management system. The proposal has been implemented using data from the Global Biodiversity Information Facility (GBIF), the Catalogue of Life (CoL), the World Register of Marine Species (WoRMS), the Integrated Taxonomic Information System (ITIS) and the Global Names Index (GNI). The approach improves data quality, avoiding taxonomic and nomenclature errors whilst increasing the availability and accessibility of the information.
    Peer Reviewed
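
    As an illustration of the data-wrapper idea, the sketch below queries one of the sources named above, GBIF, through its public REST API and maps the response onto a small common schema. The schema fields and the integrate helper are assumptions made for the example, not the LifeWatch implementation.

```python
import json
import urllib.parse
import urllib.request

GBIF_MATCH = "https://api.gbif.org/v1/species/match"

def gbif_wrapper(name):
    """Wrap GBIF's name-matching service and map its response onto a
    small common schema shared by all source wrappers (illustrative)."""
    url = GBIF_MATCH + "?" + urllib.parse.urlencode({"name": name})
    with urllib.request.urlopen(url) as resp:
        raw = json.load(resp)
    return {
        "source": "GBIF",
        "scientific_name": raw.get("scientificName"),
        "rank": raw.get("rank"),
        "status": raw.get("status"),
        "source_id": raw.get("usageKey"),
    }

def integrate(name, wrappers):
    """Join records from every wrapper; disagreements between sources
    surface here, which is where nomenclature errors get caught."""
    return [w(name) for w in wrappers]

print(integrate("Puma concolor", [gbif_wrapper]))
```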

    Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure

    Big data research has attracted great attention in science, technology, industry and society. It is developing alongside the evolving scientific paradigm, the fourth industrial revolution, and the transformational innovation of technologies. However, its nature and fundamental challenges have not been fully recognized, and it has not yet formed its own methodology. This paper explores and answers the following questions: What is big data? What are the basic methods for representing, managing and analyzing big data? What is the relationship between big data and knowledge? Can we find a mapping from big data into knowledge space? What kind of infrastructure is required to support not only big data management and analysis but also knowledge discovery, sharing and management? What is the relationship between big data and the scientific paradigm? What is the nature and fundamental challenge of big data computing? A multi-dimensional perspective is presented toward a methodology of big data computing.
    Comment: 59 pages

    Real-time data analytic platform

    The world of data is currently growing, above all in the areas of Data Science and Data Engineering. Data analysis has become increasingly relevant for gaining deeper knowledge about a given company, and it represents a business opportunity, precisely because of the emerging presence of data derived from Artificial Intelligence, the Internet of Things (IoT), social media, and software/hardware components. In order to process, analyse and distribute these data within a short time frame, real-time processing has gained popularity, and real-time data analytics platforms have begun to emerge, setting aside traditional batch processing. Indeed, to develop a data analytics platform, whether real-time or not, Big Data architectures and their components have become essential. The existing Big Data architectures, Lambda and Kappa, are supported by various components, offering the opportunity to explore their functionality in developing real-time data analytics platforms. When implementing this kind of solution, the question sometimes arises as to which of the architectures is best suited to a given type of business. This internship report presents an analysis of, and conclusions about, a possible correlation between business types and the data analytics solutions best suited to supporting them. Throughout the document, the possibility is also considered of developing a real-time data analytics platform generic enough to be applicable to any type of business, significantly reducing development and deployment costs. In this context, the Lambda and Kappa architectures are examined in order to understand whether they are universal enough for that purpose, or whether a customisation based on their components is viable. To verify whether either of these Big Data architectures can be implemented as a generic real-time data analytics platform, the report also describes the development of a specific use case based on the Kappa architecture.
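
    The Kappa architecture examined in the report treats a single append-only event log as the source of truth, with reprocessing done by replaying the log through a new version of the stream job. The sketch below is an illustrative Python stand-in for such a pipeline; the in-memory Log class replaces a real log such as Kafka, and the event shape and build_view job are invented for the example.

```python
from collections import defaultdict

class Log:
    """Append-only event log: the single source of truth in a
    Kappa architecture (an in-memory stand-in for e.g. Kafka)."""
    def __init__(self):
        self.events = []

    def append(self, event):
        self.events.append(event)

    def replay(self, from_offset=0):
        yield from self.events[from_offset:]

def build_view(log, from_offset=0):
    """Stream job: fold events into a materialized view. Reprocessing
    (the Kappa answer to Lambda's batch layer) is just replaying the
    log from offset 0 through a new version of this job."""
    totals = defaultdict(float)
    for event in log.replay(from_offset):
        totals[event["sensor"]] += event["value"]
    return dict(totals)

log = Log()
log.append({"sensor": "s1", "value": 2.0})
log.append({"sensor": "s2", "value": 5.0})
log.append({"sensor": "s1", "value": 1.5})
print(build_view(log))   # {'s1': 3.5, 's2': 5.0}
```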

    Multimodal Composing Across Disciplines: Examining Community College Professors’ Perceptions of Twenty-First Century Literacy Practices

    Providing a close examination of how professors approach twenty-first-century literacy practices and the production of multimodal texts, this project focuses on community college professors’ perceptions and expectations of students’ composing abilities pertaining to academic discourse across disciplines. Participants included 24 professors from a variety of disciplines at a large community college. The project examined survey responses, assignment guidelines, course syllabi, course outcomes, and video interviews of five of the 24 participants; the video interviews provided greater insight into participants’ perceptions and expectations. The research questions targeted course and assignment design, course outcomes, and assessment practices. The findings suggest that despite access to technology, the increased availability of mobile devices (for both instructors and students), and ample information technology support, student production of multimodal texts occurs only minimally at the site in question. Participants appear to struggle to meet course outcomes and address course content when attempting to integrate modes other than the written or alphabetic; therefore, they do not actively pursue a multimodal pedagogy. Recognizing the value of integrating digital technologies into course and assignment designs is often challenging for community college instructors, who may struggle to understand the technologies available to them or who lack the skills or time to develop technologically advanced courses. However, literacy practices today include producing texts in written, visual, aural or digital modes, all of which encourage the use of digital technologies and the production of multimodal texts. Recent scholarship has not fully examined whether making meaning of and producing multimodal texts is congruent with academic discourse in a community college setting. Indeed, community colleges enroll “43% (7.5 million credit students) of the postsecondary education student population, yet they continue to be the most understudied” (Kater & Levin, 2013, p. ix). Reporting on faculty perceptions across disciplines, this study provides a valuable analysis of the challenges community college professors confront; it confirms an interest in developing a multimodal pedagogy but finds that resistance arises from limitations of time and the difficulty of ensuring alignment with course outcomes.

    A DevOps approach to infrastructure on demand

    As DevOps grows in importance in companies, there is increasing interest in automating the process of building and deploying infrastructure, with the goals of reducing complexity for non-DevOps engineers and making infrastructure less error-prone than manual provisioning. This work explores how to build a solution for managing infrastructure on demand while supporting specific services relevant to git profile analysis, such as Sonarqube and Jenkins. The work first introduces its context, the problem the solution is trying to solve, and the methodology used to develop it. The State of the Art presents the topics needed to understand the implementation, including concepts such as DevOps and automation, and covers specific technologies such as GraphQL, Docker, Terraform and Ansible. A value analysis was also carried out to explore stakeholders’ main concerns when managing their infrastructure and to define the value of the solution being developed. Finally, the solution was implemented using various technologies and with scalability in mind, so that the number of supported services can grow with minimal changes. The work will interest readers interested in DevOps, Infrastructure as Code, and automation in general.
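
    As a minimal sketch of the infrastructure-on-demand idea, assuming Docker as the runtime, the example below provisions the two services named above through the Docker SDK for Python. The SERVICES registry, image tags and port mappings are illustrative choices, not the report's implementation.

```python
# Requires the Docker SDK for Python: pip install docker
import docker

# Services relevant to git profile analysis, as in the report;
# image tags and port mappings are illustrative choices.
SERVICES = {
    "sonarqube": {"image": "sonarqube:lts", "ports": {"9000/tcp": 9000}},
    "jenkins": {"image": "jenkins/jenkins:lts", "ports": {"8080/tcp": 8080}},
}

client = docker.from_env()

def provision(service):
    """Create the requested service on demand; supporting a new service
    is just a new SERVICES entry, which keeps the design scalable."""
    spec = SERVICES[service]
    return client.containers.run(
        spec["image"], name=service, detach=True, ports=spec["ports"])

def teardown(service):
    """Destroy a service when it is no longer needed."""
    container = client.containers.get(service)
    container.stop()
    container.remove()

provision("sonarqube")
```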

    p-medicine: a medical informatics platform for integrated large scale heterogeneous patient data

    Secure access to patient data is becoming increasingly important as medical informatics grows in significance, both to assist with population health studies and to support patient-specific medicine in treatment. However, assembling the many different types of data emanating from the clinic is in itself difficult, and doing so across national borders compounds the problem. In this paper we present our solution: an easy-to-use distributed informatics platform embedding a state-of-the-art data warehouse that incorporates a secure pseudonymisation system protecting access to personal healthcare data. Using this system, a whole range of patient-derived data, from genomics to imaging to clinical records, can be assembled and linked, and then connected with analytics tools that help us to understand the data. Research performed in this environment will have immediate clinical impact for personalised patient healthcare.
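
    The pseudonymisation idea can be sketched with a keyed hash: the same patient identifier always yields the same pseudonym, so records from different sources remain linkable without revealing identity. The Python below is an illustrative HMAC-based sketch, not p-medicine's actual scheme; the key handling and pseudonym length are assumptions.

```python
import hashlib
import hmac

# Illustrative only: in a real deployment the secret key would be
# held by a trusted pseudonymisation service, never by researchers.
SECRET_KEY = b"held-by-the-pseudonymisation-service"

def pseudonymise(patient_id: str) -> str:
    """Derive a stable pseudonym from a patient identifier. The same
    patient always maps to the same pseudonym, so genomic, imaging and
    clinical records can be linked without exposing the identity."""
    digest = hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]   # truncation length is arbitrary

record = {"patient": pseudonymise("NHS-1234567"), "diagnosis": "C50.9"}
print(record)
```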