54 research outputs found

    Towards Combining Individual and Collaborative Work Spaces under a Unified E-Portfolio

    Proceedings of: 11th International Conference on Computational Science and Applications (ICCSA 2011). Santander, Spain, June 20-23, 2011.
    E-portfolios in learning environments have been attributed numerous benefits and their presence has been steadily increasing, as has the variety of environments in which a student participates. Collaborative learning requires communication and resource sharing among team members. Students may participate in multiple teams over a long period of time, sometimes even simultaneously. Conventional e-portfolios are oriented toward showcasing individual achievements, but they also need to reflect collaborative achievements equally. The approach described in this paper aims to offer students an e-portfolio as a local folder on their personal computer containing a combined view of their individual and collaborative work spaces. The content of this folder can be synchronized with a remote server, thus achieving resource sharing and publication of a clearly identified set of resources.
    Work partially funded by the Learn3 project, “Plan Nacional de I+D+I TIN2008-05163/TSI”, the Consejo Social - Universidad Carlos III de Madrid, the Acción Integrada Ref. DE2009-0051, and the “Emadrid: Investigación y desarrollo de tecnologías para el e-learning en la Comunidad de Madrid” project (S2009/TIC-1650).
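    A minimal sketch of the idea described in this abstract, assuming hypothetical local directory names and a mounted path standing in for the remote server; it is not the authors' implementation, only an illustration of combining individual and team work spaces into one portfolio folder and mirroring it for sharing.

        # Illustrative only: paths and helper names below are assumptions, not from the paper.
        import shutil
        from pathlib import Path

        def build_portfolio(individual_dir: Path, team_dirs: list[Path], portfolio_dir: Path) -> None:
            # copy the individual work space and each collaborative work space into one folder
            portfolio_dir.mkdir(parents=True, exist_ok=True)
            shutil.copytree(individual_dir, portfolio_dir / "individual", dirs_exist_ok=True)
            for team in team_dirs:
                shutil.copytree(team, portfolio_dir / "teams" / team.name, dirs_exist_ok=True)

        def synchronize(portfolio_dir: Path, remote_mount: Path) -> None:
            # mirror the combined folder to a location shared with the server
            shutil.copytree(portfolio_dir, remote_mount / portfolio_dir.name, dirs_exist_ok=True)

        if __name__ == "__main__":
            build_portfolio(Path("my_work"), [Path("team_alpha"), Path("team_beta")], Path("eportfolio"))
            synchronize(Path("eportfolio"), Path("/mnt/portfolio_server"))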

    Semantic model for mining e-learning usage with ontology and meaningful learning characteristics

    The use of e-learning in higher education institutions is a necessity in the learning process. E-learning accumulates vast amounts of usage data which could yield new knowledge useful for educators. The demand to gain knowledge from e-learning usage data requires a correct mechanism to extract exact information. Current models for mining e-learning usage have focused on activity usage but ignored action usage. In addition, the models lack the ability to incorporate learning pedagogy, leading to a semantic gap when annotating mining data for the education domain. Another issue is the absence of usage recommendations derived from the results of the data mining task. This research proposes a semantic model for mining e-learning usage with ontology and meaningful learning characteristics. The model starts by preparing data, including activity and action hits. The next step is to calculate meaningful hits, which are categorized into five groups, namely active, cooperative, constructive, authentic, and intentional. The process continues by applying K-means clustering analysis to group usage data into three clusters. Lastly, the usage data is mapped into the ontology, and the ontology manager generates the meaningful usage cluster and the usage recommendation. The model was experimented with three datasets of distinct courses and evaluated by mapping against the student learning outcomes of the courses. The results showed a positive relationship between meaningful hits and learning outcomes, and a positive relationship between meaningful usage clusters and learning outcomes. It can be concluded that the proposed semantic model is valid at a 95% confidence level. The model is capable of mining and gaining insight into e-learning usage data and of providing usage recommendations.
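    The clustering step described above can be pictured with a short sketch; the toy matrix, the column order and the scikit-learn usage below are assumptions for illustration, not the authors' code or data.

        # rows = students, columns = meaningful-hit counts for the five categories:
        # active, cooperative, constructive, authentic, intentional (toy values)
        import numpy as np
        from sklearn.cluster import KMeans

        meaningful_hits = np.array([
            [120, 30, 45, 10, 25],
            [ 15,  5, 10,  2,  4],
            [ 60, 20, 30,  8, 15],
            [200, 55, 80, 25, 40],
        ])

        # group usage data into three clusters, as in the proposed model
        kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(meaningful_hits)
        print(kmeans.labels_)  # per-student cluster labels, later mapped into the ontology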

    Configuration Analysis for Large Scale Feature Models: Towards Speculative-Based Solutions

    High-variability systems are software systems in which variability management is a central activity. Current examples of high-variability systems are the Drupal web content management system, the Linux kernel, and the Debian Linux distributions. Configuration in high-variability systems is the selection of configuration options according to their configuration constraints and the user requirements. Feature models are a de facto standard for modelling the common and variable functionalities of high-variability systems. However, the large number of components and configurations that a feature model may contain makes the manual analysis of these models a very costly and error-prone task. This gave rise to the automated analysis of feature models, with computer-assisted mechanisms and tools to extract information from these models. Traditional solutions for the automated analysis of feature models follow a sequential computing approach, using a single central processing unit and memory. These solutions are adequate for small-scale systems, but they incur high computational costs when working with large-scale, high-variability systems. Although computing resources exist to improve the performance of such solutions, any solution based on a sequential computing approach needs to be adapted to use these resources efficiently and to optimize its computational performance. Examples of these resources are multi-core technology for parallel computing and network technology for distributed computing.
    This thesis explores the adaptation and scalability of solutions for the automated analysis of large-scale feature models. First, we present the use of speculative programming to parallelize solutions. In addition, we look at a configuration problem from another perspective and solve it by adapting and applying a non-traditional solution. We then validate the scalability and the computational performance improvements of these solutions for the automated analysis of large-scale feature models. Specifically, the main contributions of this thesis are:
    • Speculative programming for preferred minimal conflict detection. Minimal conflict detection algorithms determine the minimal set of conflicting constraints responsible for the defective behaviour of the model under analysis. We propose a solution that uses speculative programming to execute in parallel, and thereby reduce the running time of, the computationally expensive operations that determine the control flow of preferred minimal conflict detection in large-scale feature models.
    • Speculative programming for preferred minimal diagnosis. Minimal diagnosis algorithms determine a minimal set of constraints which, by suitably adapting their state, yield a consistent, conflict-free model. This work presents a solution for preferred minimal diagnosis in large-scale feature models through the speculative, parallel execution of the computationally expensive operations that determine the control flow, thereby reducing the solution's running time.
    • Minimal and preferred completion of a model configuration by diagnosis. Solutions for completing a partial configuration determine a set of options, not necessarily minimal or preferred, that yields a complete configuration. This thesis solves the minimal and preferred completion of a model configuration using techniques previously applied in the context of feature model diagnosis.
    The thesis evaluates that all our solutions preserve the expected output values and also deliver performance improvements in the automated analysis of large-scale feature models for the operations described.
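    The speculative idea behind the first two contributions can be sketched as follows; is_consistent and the constraint sets are placeholders for an expensive solver call on feature-model constraints, so this is an illustration of the principle rather than the thesis implementation: while the check that decides the next branch is running, the checks required by both possible outcomes are started in parallel and only the one the actual control flow needs is kept.

        from concurrent.futures import ThreadPoolExecutor

        def is_consistent(constraints: frozenset) -> bool:
            # placeholder for an expensive SAT/CSP consistency check
            return "conflict" not in constraints

        def speculative_step(current, branch_if_consistent, branch_if_inconsistent):
            with ThreadPoolExecutor(max_workers=3) as pool:
                f_now = pool.submit(is_consistent, current)
                f_true = pool.submit(is_consistent, branch_if_consistent)      # speculative
                f_false = pool.submit(is_consistent, branch_if_inconsistent)   # speculative
                # keep only the speculative result that the real outcome requires
                return f_true.result() if f_now.result() else f_false.result()

        print(speculative_step(frozenset({"a", "b"}),
                               frozenset({"a"}),
                               frozenset({"a", "conflict"})))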

    Cultural Heritage Through Time: A Case Study at Hadrian’s Wall, United Kingdom


    Proceedings of the Second International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2015) Krakow, Poland

    Proceedings of: Second International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2015). Krakow (Poland), September 10-11, 2015

    A geographical database of Infrastructures in Europe - A contribution to the knowledge base of the LUISA modelling platform

    Infrastructures are the facilities and systems that provide essential services for the functioning of an organization, city, region, country, and therefore society as a whole. Often the term refers to the physical facilities which society uses to work effectively, such as transport, energy, water and communication networks, but also to industrial production facilities, social facilities such as schools, hospitals and residential areas, and even defence and safety facilities. Some infrastructures are considered ‘critical’ because their destruction or disruption by natural or man-made disasters could significantly compromise the functioning of the economy and society and their security. Detailed inventories of infrastructures in Europe are essential for various purposes and applications. These inventories should be as complete as possible, ideally covering all infrastructure typologies and describing both their characteristics and their precise location. Geographical Information Systems (GIS) are the most adequate tools to construct and manage geographical databases of infrastructures. Such geo-databases are indispensable for assessing risk to infrastructures and drafting plans for their protection. In addition, these databases could be used for urban and regional planning and for modelling of land use, transport, energy and the economy.
    The ultimate objective of this work was to produce a ready-to-use geographical database of infrastructures in Europe, enabling analyses for various purposes and applications at the JRC. Moreover, this work is a contribution to the knowledge base of the Land Use-based Integrated Sustainability Assessment (LUISA) modelling platform, which is used to assess territorial impacts of EU policies and investments. The database was aimed at covering as many sectors as possible and a wide geographical extent (EU28 + EFTA) at high spatial resolution. The work did not aim at producing new data but rather at seeking, assembling and preparing data from existing, disparate data sources. In a first stage, the availability of infrastructure geographical layers within and outside the JRC was checked. Data from various open and proprietary sources were collected to build a geo-database storing both the location and key attributes of each infrastructure in vector and raster formats. The assets addressed include transport infrastructures (e.g. roads, railways, ports, and inland waterways), energy (production and transport), industry (heavy industries and water and waste treatment), social infrastructure (public health and education facilities) and world heritage sites, totalling 37 types or subtypes of infrastructures. A set of factsheets was constructed to describe and map the geographical distribution of infrastructures in Europe (chapter 3 of this report). The geo-database will be maintained and updated whenever appropriate by the JRC and can be accessed upon request.
    JRC.H.8 - Sustainability Assessment
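    As a rough illustration of the assembly step, the sketch below reads a few hypothetical source layers, harmonizes their projection and writes them into a single GeoPackage with geopandas; file names, layer names and the choice of CRS are assumptions, not the actual JRC sources.

        import geopandas as gpd

        # hypothetical input layers from disparate open and proprietary sources
        sources = {
            "roads": "osm_roads.shp",
            "hospitals": "health_facilities.geojson",
            "power_plants": "energy_sites.gpkg",
        }

        for layer_name, path in sources.items():
            gdf = gpd.read_file(path)
            gdf = gdf.to_crs(epsg=3035)  # harmonize to a common European projection (ETRS89-LAEA)
            gdf.to_file("infrastructure_europe.gpkg", layer=layer_name, driver="GPKG")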

    New Metropolitan Perspectives

    This open access book presents the outcomes of the symposium “NEW METROPOLITAN PERSPECTIVES,” held at Mediterranea University, Reggio Calabria, Italy, on May 26–28, 2020. Addressing the challenge of Knowledge Dynamics and Innovation-driven Policies Towards Urban and Regional Transition, the book presents a multi-disciplinary debate on the new frontiers of strategic and spatial planning, economic programs and decision support tools in connection with urban–rural area networks and metropolitan centers. The respective papers focus on six major tracks: Innovation dynamics, smart cities and ICT; Urban regeneration, community-led practices and PPP; Local development, inland and urban areas in territorial cohesion strategies; Mobility, accessibility and infrastructures; Heritage, landscape and identity; and Risk management, environment and energy. The book also includes a Special Section on Rhegion United Nations 2020-2030. Given its scope, the book will benefit all researchers, practitioners and policymakers interested in issues concerning metropolitan and marginal areas.

    Proceedings of the XIII Global Optimization Workshop: GOW'16

    [Excerpt] Preface: Past Global Optimization Workshops have been held in Sopron (1985 and 1990), Szeged (WGO, 1995), Florence (GO’99, 1999), Hanmer Springs (Let’s GO, 2001), Santorini (Frontiers in GO, 2003), San José (GO’05, 2005), Mykonos (AGO’07, 2007), Skukuza (SAGO’08, 2008), Toulouse (TOGO’10, 2010), Natal (NAGO’12, 2012) and Málaga (MAGO’14, 2014) with the aim of stimulating discussion between senior and junior researchers on the topic of Global Optimization. In 2016, the XIII Global Optimization Workshop (GOW’16) takes place in Braga and is organized by three researchers from the University of Minho. Two of them belong to the Systems Engineering and Operational Research Group of the Algoritmi Research Centre, and the other to the Statistics, Applied Probability and Operational Research Group of the Centre of Mathematics. The event received more than 50 submissions from 15 countries across Europe, South America and North America. We want to express our gratitude to the invited speaker Panos Pardalos for accepting the invitation and sharing his expertise, helping us to meet the workshop objectives. GOW’16 would not have been possible without the valuable contributions of the authors and the International Scientific Committee members. We thank you all. This proceedings book intends to present an overview of the topics that will be addressed in the workshop, with the goal of contributing to interesting and fruitful discussions between the authors and participants. After the event, high-quality papers can be submitted to a special issue of the Journal of Global Optimization dedicated to the workshop. [...]