10 research outputs found

    Histoedu, social networks and history of education: pedagogical past from a present perspective

    This article presents some preliminary results and considerations about an R&D project on the use of ICTs and social media for research on the history of education. The progress achieved in improving teaching and scientific communication, in the significance of learning, and in the motivation shown by the academic community leads us to argue for the need to consolidate what we have been calling the history of education 2.0, characteristic of the digital society. The many opportunities to learn and to build knowledge in a shared way challenge us to generate networked knowledge and to cooperate in order to disseminate our research, learn about other ways of working, share resources and experiences, exchange information, and work collaboratively. Our conviction about, and commitment to, this kind of networked work has led us to create HistoEdu, a website for scientific and pedagogical collaboration on the history of education as a discipline.

    Report on the Second Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE2)

    This technical report records and discusses the Second Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE2). The report includes a description of the alternative, experimental submission and review process; two workshop keynote presentations; a series of lightning talks; a discussion on sustainability; and five discussions from the topic areas of exploring sustainability, software development experiences, credit & incentives, reproducibility & reuse & sharing, and code testing & code review. For each topic, the report includes a list of tangible actions that were proposed and that could lead to change. The workshop recognized that reliance on scientific software is pervasive in all areas of world-leading research today. Participants explored different perspectives on the concept of sustainability and, from their experiences, identified key enablers of and barriers to sustainable scientific software. In addition, recommendations involving new requirements, such as software credit files and software prize frameworks, were outlined for improving practices in sustainable software engineering. There was also broad consensus that formal training in software development or engineering is rare among practitioners. Significant strides need to be made in building a sense of community through training in software and technical practices, in increasing the size and scope of such training, and in integrating it directly into graduate education programs. Finally, journals can define and publish policies to improve reproducibility, and reviewers can insist that authors provide sufficient information and access to data and software to allow the results in a paper to be reproduced. To this end, the report compiles a list of criteria that journals can give reviewers to make it easier to review software submitted for publication as a “Software Paper”.

    Support for time series in an e-Science platform

    In recent years, database management solutions that do not follow the traditional object-relational approach have gained popularity for cases where it is not necessary to maintain ACID transactions or to use SQL to query the database. NoSQL databases differ from the relational approach in that they store data in key-value, column, graph, or document structures, and they are increasingly used in applications that deal with so-called Big Data. In the Earth Sciences domain, laser sensors (LiDAR) used to analyze wind conditions at wind-turbine installations for electricity production generate time series that researchers later use in their work. Because of the huge amount of data these sensors produce, relational databases are inefficient for storing the resulting time series. The first objective of this dissertation is to analyze existing NoSQL data management solutions and then apply one of them in the Windscanner.eu project. The second objective is to design, implement, and test an e-Science platform with a REST API that can be used to upload or download time series, together with a Web application for domain researchers to manage research objects.
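
    The abstract does not specify the platform's API or storage layout, so the following is only a minimal, hypothetical Python sketch of the two ideas it combines: a key-value layout for time series (an in-memory dict standing in for a NoSQL store, keyed by sensor id and hourly time bucket) and a REST service for uploading and downloading samples. The /series routes, the bucketing scheme, and all names are illustrative assumptions, not the Windscanner.eu design.

```python
# Hypothetical sketch only: an in-memory dict stands in for a NoSQL
# key-value store; keys combine sensor id and an hourly time bucket so
# that no single key accumulates an unbounded list of samples.
from flask import Flask, jsonify, request

app = Flask(__name__)
store = {}  # (sensor_id, hour_bucket) -> list of (timestamp, value)

def bucket(ts: float) -> int:
    """Group samples into hourly buckets (3600-second windows)."""
    return int(ts // 3600)

@app.route("/series/<sensor_id>", methods=["POST"])
def upload(sensor_id):
    # Request body: JSON list of [timestamp, value] pairs.
    for ts, value in request.get_json():
        store.setdefault((sensor_id, bucket(ts)), []).append((ts, value))
    return "", 204

@app.route("/series/<sensor_id>", methods=["GET"])
def download(sensor_id):
    # Scan only the buckets overlapping the requested time range.
    start, end = float(request.args["start"]), float(request.args["end"])
    points = []
    for b in range(bucket(start), bucket(end) + 1):
        points.extend(p for p in store.get((sensor_id, b), [])
                      if start <= p[0] <= end)
    return jsonify(sorted(points))

if __name__ == "__main__":
    app.run()
```

    A real deployment would replace the dict with a column or key-value database; the bucketing idea carries over as the row-key design.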

    Interacting with scientific workflows


    Generic Metadata Handling in Scientific Data Life Cycles

    Scientific data life cycles define how data is created, handled, accessed, and analyzed by users. Such data life cycles are becoming increasingly sophisticated as the sciences they serve grow more demanding and complex with the coming advent of exascale data and computing. The overarching data life cycle management background includes multiple abstraction categories: data sources, data and metadata management, computing and workflow management, security, data sinks, and methods for enabling utilization. The challenges in this context are manifold. One is to hide the complexity from the user and to enable seamless use of resources with regard to usability and efficiency. Another is to enable generic metadata management that is not restricted to one use case but can be adapted to further ones with limited effort. Metadata management is essential because it saves scientists time by removing the need to manually keep track of data, for example by its content and location. As the number of files grows into the millions, managing data without metadata becomes increasingly difficult; the solution is to employ metadata management, which organizes data based on information about it. Previously, use cases tended to support either highly specific metadata management or none at all. Now, a generic metadata management concept is available that can be used to integrate metadata capabilities with use cases efficiently. The concept was implemented within the MoSGrid data life cycle, which enables molecular simulations on distributed HPC-enabled data and computing infrastructures. The implementation provides easy-to-use and effective metadata management: automated extraction, annotation, and indexing of metadata were designed, developed, and integrated, and search capabilities are provided via a seamless user interface, from whose results further analysis runs can be started directly. A complete evaluation of the concept, both in general and along the example implementation, is presented. In conclusion, the generic metadata management concept advances the state of the art in scientific data life cycle management.
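
    The abstract describes the concept but not its code; the sketch below illustrates in Python what "generic" metadata management can mean in practice: per-file-type extractors plugged into a registry, automated indexing of the extracted key-value pairs, and search over the index. All names and the .log extractor are invented for illustration; this is not the MoSGrid implementation.

```python
# Hypothetical sketch: pluggable metadata extractors feed an inverted
# index that maps (key, value) metadata pairs to file paths.
import os
from collections import defaultdict

extractors = {}  # file extension -> function(path) -> metadata dict

def register(ext):
    """Register an extractor for one file type (the 'generic' hook)."""
    def deco(fn):
        extractors[ext] = fn
        return fn
    return deco

@register(".log")
def extract_log(path):
    # Illustrative extractor: metadata from filesystem info only.
    return {"type": "simulation-log", "size": os.path.getsize(path)}

index = defaultdict(set)  # (key, value) -> set of file paths

def ingest(path):
    """Automatically extract and index metadata for one file."""
    ext = os.path.splitext(path)[1]
    for kv in extractors.get(ext, lambda p: {})(path).items():
        index[kv].add(path)

def search(**criteria):
    """Return files whose metadata matches all given key=value pairs."""
    hits = [index[kv] for kv in criteria.items()]
    return set.intersection(*hits) if hits else set()
```

    New use cases adapt the scheme by registering their own extractors rather than by changing the index or the search code.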

    Discovery of Potential Parallelism in Sequential Programs

    In the era of multicore processors, the responsibility for performance gains has shifted onto software developers. Once improvements to the sequential algorithm have been exhausted, software-managed parallelism is the only option left. However, writing parallel code is still difficult, especially when parallelizing sequential code written by someone else. A key task in this process is the identification of suitable parallelization targets in the source code. Parallelism discovery tools help developers find such targets automatically. Unfortunately, tools that identify parallelism during compilation are usually conservative due to the lack of runtime information, while tools relying on runtime information suffer primarily from high overhead in terms of both time and memory. This dissertation presents a generic framework for parallelism discovery based on dynamic program analysis, supporting various types of parallelism while incurring practically affordable overhead. The framework contains two main components: an efficient data-dependence profiler and a set of parallelism discovery algorithms based on a language-independent concept called the Computational Unit. The data-dependence profiler serves as the foundation of the parallelism discovery framework. Traditional dependence profiling approaches introduce a tremendous amount of time and memory overhead. To lower the overhead, current methods limit their scope to the subset of the dependence information needed for the analysis they were created for, sacrificing generality and discouraging reuse. In contrast, the profiler presented in this thesis addresses the problem via signature-based memory management and a lock-free parallel design. It produces detailed dependences not only for sequential but also for multi-threaded code without causing prohibitive overhead, allowing it to serve as a generic base for various program analysis techniques. Computational Units (CUs) provide a language-independent foundation for parallelism discovery. CUs are computations that follow the read-compute-write pattern. Unlike other concepts, they are not restricted to predefined language constructs. A program is represented as a CU graph, in which vertices are CUs and edges are data dependences. This allows the detection of parallelism that spreads across multiple language constructs, taking code refactoring into consideration. The parallelism discovery algorithms cover both loop and task parallelism. Our experiments show that 1) the data-dependence profiler has a very competitive average slowdown of around 80× with accuracy higher than 99.6%; 2) the framework discovers parallelism with high accuracy, identifying 92.5% of the parallel loops in the NAS benchmarks; and 3) when well-known open-source software is parallelized following the output of the framework, reasonable speedups are obtained. Finally, use cases beyond parallelism discovery are briefly demonstrated to show the generality of the framework.
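
    As a toy illustration of the CU-graph idea (not the dissertation's implementation), the snippet below encodes a program as computational units connected by data-dependence edges and reports pairs that may run in parallel: two CUs qualify when neither can reach the other through dependences. The CU names and edges are invented.

```python
# Hypothetical sketch: vertices are computational units (CUs), edges
# are data dependences; independence in both directions means the two
# CUs may execute in parallel.
from itertools import combinations

edges = {("read_a", "compute_a"), ("read_b", "compute_b"),
         ("compute_a", "write_out"), ("compute_b", "write_out")}

def reachable(graph, src):
    """All CUs reachable from src by following dependence edges."""
    seen, stack = set(), [src]
    while stack:
        node = stack.pop()
        for u, v in graph:
            if u == node and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

cus = {u for e in edges for u in e}
for a, b in combinations(sorted(cus), 2):
    if b not in reachable(edges, a) and a not in reachable(edges, b):
        print(f"{a} and {b} may run in parallel")
```

    In the real framework the edges come from the data-dependence profiler rather than being written by hand, and the discovery algorithms operate on this graph to propose loop- and task-parallel regions.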

    XSEDE: eXtreme Science and Engineering Discovery Environment Third Quarter 2012 Report

    The Extreme Science and Engineering Discovery Environment (XSEDE) is the most advanced, powerful, and robust collection of integrated digital resources and services in the world. It is an integrated cyberinfrastructure ecosystem with singular interfaces for allocations, support, and other key services that researchers can use to interactively share computing resources, data, and expertise. This is a report of project activities and highlights from the third quarter of 2012. Funding: National Science Foundation, OCI-105357.

    Refactoring HUBzero for Linked Data

    The HUBzero cyberinfrastructure provides a virtual research environment that includes a set of tools for web-based scientific collaboration and a platform for publishing and using resources such as executable software, source code, images, learning modules, videos, documents, and datasets. Released as open source software in 2010, HUBzero is implemented on a typical LAMP stack (Linux, Apache, MySQL, and PHP) and uses the Joomla! content management system. This paper describes the subsequent refactoring of HUBzero to produce and expose Linked Data from its backend relational database, altering the external expression of the data without changing its internal structure. The Open Archives Initiative Object Reuse and Exchange (OAI-ORE) specification is applied to model the basic structural semantics of HUBzero resources as Nested Aggregations, and data and metadata are mapped to vocabularies such as Dublin Core and published within the web representations of the resources using RDFa. Resource Maps can be harvested using an RDF crawler or an Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) data provider, both of which were bundled for demonstration purposes. A visualization was produced to browse and navigate the relations among data and metadata from an example hub.
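
    The paper's exact RDF output is not reproduced here, but a small, hedged sketch using the rdflib library shows what an OAI-ORE Resource Map describing an aggregation with Dublin Core metadata can look like. The example.org URIs, the resource title, and the aggregated parts are invented; the paper additionally embeds such statements in web pages via RDFa, which this sketch omits.

```python
# Hypothetical sketch: build an OAI-ORE Resource Map with rdflib.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

ORE = Namespace("http://www.openarchives.org/ore/terms/")

g = Graph()
g.bind("ore", ORE)
g.bind("dcterms", DCTERMS)

# Invented URIs: the Resource Map (rem) describes the Aggregation (agg).
agg = URIRef("http://example.org/resources/42#aggregation")
rem = URIRef("http://example.org/resources/42.rdf")

g.add((rem, RDF.type, ORE.ResourceMap))
g.add((rem, ORE.describes, agg))
g.add((agg, RDF.type, ORE.Aggregation))
g.add((agg, DCTERMS.title, Literal("Example dataset")))
for part in ("data.csv", "readme.txt"):
    g.add((agg, ORE.aggregates,
           URIRef(f"http://example.org/resources/42/{part}")))

print(g.serialize(format="turtle"))
```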