15 research outputs found

    Improvements of Decision Support Systems for Public Administrations via a Mechanism of Co-creation of Value

    This paper focuses on a possible improvement of knowledge-based decision support systems for human resource management within Public Administrations, using a value co-creation mechanism in line with the Service-Dominant Logic (SDL) paradigm. In particular, it applies ontology-driven data entry procedures to trigger cooperation between the Public Administration itself and its employees. The advantages of this approach are clear: constraining the data entry process by means of a term-definition ontology improves the quality of the gathered data, thus reducing potential mismatching problems and enabling a meaningful skill gap analysis between actual and ideal worker competence profiles. The procedure comprises the following steps: analyzing organograms and job descriptions; modelling Knowledge, Skills and Attitudes (KSA) for job descriptions; transforming the KSAs of job descriptions into a standard-based model integrated with other characteristics; extracting information from Curricula Vitae according to the selected model; and comparing the profiles with the roles played by the employees. The 'a priori' ontology-driven approach adequately supports the operations that involve both the Public Administration and its employees, such as the storage of job descriptions and curricula vitae. The comparison step is useful for understanding whether employees perform roles that are coherent with their professional profiles. The proposed approach has been tested on a small test case, and the results show that its objective evaluation represents an improvement for a decision support system for the re-organization of Italian Public Administrations, where, all too often, people are engaged in activities that are far from their competences.
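The profile-comparison step described above can be illustrated with a minimal sketch. The role names, competences, and the 1-5 proficiency scale below are hypothetical illustrations, not the KSA model or data used in the paper:

```python
# Hypothetical competence profiles: an ideal role profile (from a job
# description) and an actual profile (extracted from an employee's CV).
# Competence names and 1-5 levels are invented for illustration.
ideal_profile = {"data analysis": 4, "public law": 3, "project management": 3}
employee_profile = {"data analysis": 2, "public law": 4}

def skill_gap(ideal, actual):
    """Return, for each required competence the employee falls short on,
    how far below the required level they are (missing skills count from 0)."""
    return {
        skill: required - actual.get(skill, 0)
        for skill, required in ideal.items()
        if actual.get(skill, 0) < required
    }

gaps = skill_gap(ideal_profile, employee_profile)
# 'public law' does not appear: the employee exceeds the requirement there.
```

Using a shared, ontology-constrained vocabulary for the competence names is what makes a dictionary comparison like this meaningful: free-text CVs and job descriptions would otherwise use mismatching terms for the same skill.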

    Personalisation in MOOCs: a critical literature review

    The advent and rise of Massive Open Online Courses (MOOCs) have brought many issues to the area of educational technology. Researchers in the field have been addressing issues such as the pedagogical quality of MOOCs, high attrition rates, and the sustainability of MOOCs. However, personalisation has not been a prominent subject in the wider discussion around MOOCs. This paper presents a critical survey and analysis of the available literature on personalisation in MOOCs, identifying the needs, the current state, and the efforts made to personalise learning in MOOCs. The findings illustrate that there is growing attention to personalisation as a way to improve learners' individual learning experiences in MOOCs. The personalised services most commonly applied include personalised learning paths, personalised assessment and feedback, personalised forum threads, and recommendation services for related learning materials or learning tasks.

    OGRS2012 Symposium Proceedings

    Do you remember the Open Source Geospatial Research and Education Symposium (OGRS) in Nantes? "Les Machines de l’Île", the Big Elephant, the "Storm Boat" with Claramunt, Petit et al. (2009), and "le Biniou et la Bombarde"? A second edition of OGRS was promised, and that promise is now fulfilled in OGRS 2012, Yverdon-les-Bains, Switzerland, October 24-26, 2012. OGRS is a meeting dedicated to sharing knowledge, new solutions, methods, practices, ideas and trends in the field of geospatial information through the development and the use of free and open source software in both research and education. In recent years, the development of geospatial free and open source software (GFOSS) has breathed new life into the geospatial domain. GFOSS has been extensively promoted by FOSS4G events, which evolved from meetings which gathered together interested GFOSS development communities to a standard business conference. More in line with the academic side of the FOSS4G conferences, OGRS is a rather neutral forum whose goal is to assemble a community whose main concern is to find new solutions by sharing knowledge and methods free of software license limits. This is why OGRS is primarily concerned with the academic world, though it also involves public institutions, organizations and companies interested in geospatial innovation. This symposium is therefore not an exhibition for presenting existing industrial software solutions, but an event we hope will act as a catalyst for research and innovation and new collaborations between research teams, public agencies and industries. An educational aspect has recently been added to the content of the symposium. This important addition examines the knowledge triangle - research, education, and innovation - through the lens of how open source methods can improve education efficiency. 
Based on their experience, OGRS contributors bring to the table ideas on how open source training is likely to offer pedagogical advantages that equip students with the skills and knowledge necessary to succeed in tomorrow's geospatial labor market. OGRS brings together a large collection of current innovative research projects from around the world, with the goal of examining how research uses and contributes to open source initiatives. By presenting their research, OGRS contributors shed light on how the open-source approach impacts research, and vice versa. The organizers of the symposium wish to demonstrate how the use and development of open source software strengthen education, research and innovation in geospatial fields. To support this approach, the present proceedings propose thirty short papers grouped under the following thematic headings: Education, Earth Science & Landscape, Data, Remote Sensing, Spatial Analysis, Urban Simulation and Tools. These papers are preceded by the contributions of the four keynote speakers: Prof Helena Mitasova, Dr Gérard Hégron, Prof Sergio Rey and Prof Robert Weibel, who share their expertise in research and education in order to highlight the decisive advantages of openness over the limits imposed by the closed-source license system.

    Propelling the Potential of Enterprise Linked Data in Austria. Roadmap and Report

    In times of digital transformation, and considering the potential of the data-driven economy, it is crucial not only that data is made available and that data sources can be trusted, but also that data integrity can be guaranteed, that the necessary privacy and security mechanisms are in place, and that data and access comply with policies and legislation. In many cases, complex and interdisciplinary questions cannot be answered by a single dataset, so it is necessary to combine data from multiple disparate sources. However, because most data today is locked up in isolated silos, it cannot be used to its fullest potential. The core challenge for most organisations and enterprises with regard to data exchange and integration is to combine data from internal and external data sources in a manner that supports both day-to-day operations and innovation. Linked Data is a promising data publishing and integration paradigm that builds upon standard web technologies. It supports the publishing of structured data in a semantically explicit and interlinked manner such that it can be easily connected, and consequently becomes more interoperable and useful. The PROPEL project - Propelling the Potential of Enterprise Linked Data in Austria - surveyed technological challenges, entrepreneurial opportunities, and open research questions on the use of Linked Data in a business context, and developed a roadmap and a set of recommendations for policy makers, industry, and the research community. Shifting away from a predominantly academic perspective and an exclusive focus on open data, the project looked at Linked Data as an emerging disruptive technology that enables efficient enterprise data management in the rising data economy. Current market forces provide many opportunities, but also present several data and information management challenges. 
Given that Linked Data enables advanced analytics and decision-making, it is particularly suitable for addressing today's data and information management challenges. In our research, we identified a variety of highly promising use cases for Linked Data in an enterprise context. Examples of promising application domains include "customization and customer relationship management", "automatic and dynamic content production, adaption and display", "data search, information retrieval and knowledge discovery", as well as "data and information exchange and integration". The analysis also revealed broad potential across a large spectrum of industries whose structural and technological characteristics align well with Linked Data characteristics and principles: energy, retail, finance and insurance, government, health, transport and logistics, telecommunications, media, tourism, engineering, and research and development rank among the most promising industries for the adoption of Linked Data principles. In addition to approaching the subject from an industry perspective, we also examined the topics and trends emerging from the research community in the field of Linked Data and the Semantic Web. Although our analysis revolved around a vibrant and active community composed of academia and leading companies involved in semantic technologies, we found that industry needs and research discussions are somewhat misaligned. Whereas some foundation technologies such as knowledge representation and data creation/publishing/sharing, data management and system engineering are highly represented in scientific papers, specific topics such as recommendations, or cross-topics such as machine learning or privacy and security are marginally present. Topics such as big/large data and the internet of things are (still) on an upward trajectory in terms of attention. 
In contrast, topics that are very relevant for industry, such as application-oriented topics or those relating to security, privacy and robustness, are not attracting much attention. When it comes to standardisation efforts, we identified a clear need for a more in-depth analysis of the effectiveness of existing standards, the degree of coverage they provide with respect to the foundations they belong to, and the suitability of alternative standards that do not fall under the core Semantic Web umbrella. Taking into consideration market forces, the sector analysis of Linked Data potential, the demand-side analysis and the current technological status, it is clear that Linked Data has great potential for enterprises and can act as a key driver of technological, organizational, and economic change. However, in order to ensure a solid foundation for Enterprise Linked Data, there is a need for: greater awareness of the potential of Linked Data in enterprises; lowering of entrance barriers via education and training; better alignment between industry demands and research activities; and greater support for technology transfer from universities to companies. The PROPEL roadmap recommends concrete measures to propel the adoption of Linked Data in Austrian enterprises. These measures are structured around the following fields of activity: "awareness and education", "technological innovation, research gaps, standardisation", "policy and legal", and "funding". Key short-term recommendations include the clustering of existing activities in order to raise visibility on an international level, the funding of key topics that are under-represented in the community, and the setup of joint projects. In the medium term, we recommend strengthening existing academic and private education efforts via certification, and establishing flagship projects based on national use cases that can serve as blueprints for transnational initiatives. 
This requires not only financial support, but also infrastructure support, such as data and services on top of which solutions can be built. In the long term, we recommend cooperation with international funding schemes to establish and foster a European-level agenda, and the setup of centres of excellence.
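As a loose illustration of the integration idea the report builds on - combining data from disparate silos by letting them describe the same entity via a shared identifier - consider this minimal sketch. The URIs, predicate names, and values are invented for illustration and are not taken from the report:

```python
# Hypothetical example of the Linked Data integration principle: two internal
# silos (CRM and billing) describe the same customer via a shared URI, so a
# simple union of their triples yields one integrated, queryable graph.
# All URIs, predicates, and values below are invented for illustration.
crm_triples = {
    ("http://example.org/customer/42", "name", "Acme GmbH"),
    ("http://example.org/customer/42", "segment", "retail"),
}
billing_triples = {
    ("http://example.org/customer/42", "outstanding", "1200 EUR"),
}

# Merging is a set union because triples are globally identified statements.
graph = crm_triples | billing_triples

def describe(graph, subject):
    """Collect every property asserted about one subject, across all sources."""
    return {p: o for s, p, o in graph if s == subject}

profile = describe(graph, "http://example.org/customer/42")
```

In practice this is done with RDF stores and SPARQL rather than Python sets, but the point is the same: shared identifiers make cross-source combination a merge rather than a bespoke integration project.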

    Bibliographic Control in the Digital Ecosystem

    With the contributions of international experts, the book aims to explore the new boundaries of universal bibliographic control. Bibliographic control is radically changing because the bibliographic universe is radically changing: resources, agents, technologies, standards and practices. Among the main topics addressed are: library cooperation networks; legal deposit; national bibliographies; new tools and standards (IFLA LRM, RDA, BIBFRAME); authority control and new alliances (Wikidata, Wikibase, Identifiers); new ways of indexing resources (artificial intelligence); institutional repositories; the new book supply chain; "discoverability" in the IIIF digital ecosystem; the role of thesauri and ontologies in the digital ecosystem; and bibliographic control and search engines.

    24th International Conference on Information Modelling and Knowledge Bases

    Over the last three decades, information modelling and knowledge bases have become essential subjects, not only in academic communities related to information systems and computer science, but also in business areas where information technology is applied. The series of European-Japanese Conferences on Information Modelling and Knowledge Bases (EJC) originally started as a cooperation initiative between Japan and Finland in 1982. The practical operations were then organised by Professor Ohsuga in Japan and Professors Hannu Kangassalo and Hannu Jaakkola in Finland (Nordic countries). The geographical scope has since expanded to cover Europe and other countries. The conference retains a workshop character: discussion, ample time for presentations, and a limited number of participants (50) and papers (30). Suggested topics include, but are not limited to: 1. Conceptual modelling: Modelling and specification languages; Domain-specific conceptual modelling; Concepts, concept theories and ontologies; Conceptual modelling of large and heterogeneous systems; Conceptual modelling of spatial, temporal and biological data; Methods for developing, validating and communicating conceptual models. 2. Knowledge and information modelling and discovery: Knowledge discovery, knowledge representation and knowledge management; Advanced data mining and analysis methods; Conceptions of knowledge and information; Modelling information requirements; Intelligent information systems; Information recognition and information modelling. 3. Linguistic modelling: Models of HCI; Information delivery to users; Intelligent informal querying; Linguistic foundations of information and knowledge; Fuzzy linguistic models; Philosophical and linguistic foundations of conceptual models. 4. 
Cross-cultural communication and social computing: Cross-cultural support systems; Integration, evolution and migration of systems; Collaborative societies; Multicultural web-based software systems; Intercultural collaboration and support systems; Social computing, behavioral modeling and prediction. 5. Environmental modelling and engineering: Environmental information systems (architecture); Spatial, temporal and observational information systems; Large-scale environmental systems; Collaborative knowledge base systems; Agent concepts and conceptualisation; Hazard prediction, prevention and steering systems. 6. Multimedia data modelling and systems: Modelling multimedia information and knowledge; Content-based multimedia data management; Content-based multimedia retrieval; Privacy and context enhancing technologies; Semantics and pragmatics of multimedia data; Metadata for multimedia information systems. Overall we received 56 submissions. After careful evaluation, 16 papers were selected as long papers, 17 as short papers, 5 as position papers, and 3 for presentation of perspective challenges. We thank all colleagues for their support of this issue of the EJC conference, especially the program committee, the organising committee, and the programme coordination team. The long and short papers presented at the conference are revised afterwards and published in the "Frontiers in Artificial Intelligence" series by IOS Press (Amsterdam). The "Information Modelling and Knowledge Bases" books are edited by the Editing Committee of the conference. We believe that the conference will be productive and fruitful in advancing research and the application of information modelling and knowledge bases. Bernhard Thalheim, Hannu Jaakkola, Yasushi Kiyok

    Automatic Generation of SKOS Taxonomies for Generating Topic-Based User Interfaces in MOOCs

    The aim of the paper is to provide a framework for the automatic generation of topic-based user interfaces for video lectures in MOOCs. The proposed approach leverages Fuzzy Formal Concept Analysis and Semantic Technologies, which allow the definition of solutions for supporting learners in navigating the fragments of one or more video lectures by selecting topics of interest. The use of a Semantic Web vocabulary, namely SKOS, to model topics and their relationships enables the interconnection of different video lectures, including lectures belonging to different MOOCs. The high interoperability allowed by Semantic Web technologies enables the integration of different and heterogeneous MOOC platforms, as well as other Open Repositories. This aspect fosters the capability of learners to self-regulate and to enhance their learning paths in new forms of learning experiences based on exploration.
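A rough sketch of the kind of topic-based navigation such a framework enables: pick a SKOS concept and retrieve the video fragments annotated with it and, transitively, with its narrower concepts. The taxonomy, topic names, and fragment identifiers below are invented for illustration and are not the paper's generated taxonomy:

```python
# Hypothetical SKOS-like topic taxonomy: `narrower` maps each concept to its
# skos:narrower concepts; `fragments` maps each concept to the video-lecture
# fragments annotated with it. All names and identifiers are illustrative.
narrower = {
    "machine learning": ["supervised learning", "clustering"],
    "supervised learning": [],
    "clustering": [],
}
fragments = {
    "machine learning": ["lec1#00:00"],
    "supervised learning": ["lec1#12:30", "lec3#05:10"],
    "clustering": ["lec2#08:45"],
}

def fragments_for(topic):
    """Collect fragments for a topic and, recursively, its narrower topics,
    so selecting a broad concept surfaces all more specific material."""
    result = list(fragments.get(topic, []))
    for sub in narrower.get(topic, []):
        result.extend(fragments_for(sub))
    return result
```

Because the fragments can come from different lectures (here `lec1`-`lec3`), the same traversal naturally interconnects material across courses, which is the interoperability benefit the abstract attributes to SKOS.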

    Bioinspired metaheuristic algorithms for global optimization

    This paper presents a concise comparative study of recently developed bioinspired algorithms for global optimization problems. Three metaheuristic techniques, namely Accelerated Particle Swarm Optimization (APSO), the Firefly Algorithm (FA), and the Grey Wolf Optimizer (GWO), are investigated and implemented in the Matlab environment. These methods are compared on four unimodal and multimodal nonlinear functions in order to find global optimum values. Computational results indicate that GWO outperforms the other intelligent techniques, and that all of the aforementioned algorithms can be successfully used for the optimization of continuous functions.
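As an illustration of one of the compared techniques, here is a minimal Python sketch of the Grey Wolf Optimizer applied to the sphere benchmark function. The paper's experiments were run in Matlab; the population size, iteration count, and bounds below are arbitrary choices, not the paper's settings:

```python
import numpy as np

def gwo(f, dim, n_wolves=20, iters=200, lb=-10.0, ub=10.0, seed=0):
    """Minimal Grey Wolf Optimizer: the three best wolves (alpha, beta, delta)
    guide every wolf's next position; the exploration parameter `a` decays
    linearly from 2 to 0 over the run."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(f, 1, X)
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[:3]]          # copies of the 3 leaders
        a = 2.0 - 2.0 * t / iters                  # 2 -> 0 linear decay
        for i in range(n_wolves):
            X_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a               # |A| > 1: explore, < 1: attack
                C = 2.0 * r2
                D = np.abs(C * leader - X[i])      # distance to this leader
                X_new += leader - A * D
            X[i] = np.clip(X_new / 3.0, lb, ub)    # average of leader estimates
    fitness = np.apply_along_axis(f, 1, X)
    best = X[np.argmin(fitness)]
    return best, f(best)

# Sphere benchmark: global minimum 0 at the origin.
sphere = lambda x: float(np.sum(x ** 2))
best, val = gwo(sphere, dim=5)
```

On a smooth unimodal function like the sphere, this sketch drives the best fitness close to zero within a few hundred iterations, consistent with the convergence behavior the paper reports for GWO.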

    Experimental Evaluation of Growing and Pruning Hyper Basis Function Neural Networks Trained with Extended Information Filter

    In this paper we test the Extended Information Filter (EIF) for sequential training of Hyper Basis Function neural networks with growing and pruning ability (HBF-GP). The HBF neuron allows different scaling of the input dimensions, providing better generalization when dealing with complex nonlinear problems in engineering practice. The main idea behind the HBF neuron is a generalization of the Gaussian-type neuron that applies a Mahalanobis-like distance as the distance metric between an input training sample and a prototype vector. We exploit the concept of a neuron's significance and allow growing and pruning of HBF neurons during the sequential learning process. From an engineer's perspective, the EIF is attractive for training neural networks because it allows the designer to start with only scarce initial knowledge of the system or problem. An extensive experimental study shows that an HBF neural network trained with the EIF achieves the same prediction error and compactness of network topology as one trained with the EKF, but without the need to know the initial state uncertainty, which is its main advantage over the EKF.
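The HBF unit's activation can be sketched as follows. The per-neuron scaling matrix `W` below is the ingredient that generalises the single width of an ordinary Gaussian RBF unit; the concrete vectors are illustrative values, not data from the paper's experiments:

```python
import numpy as np

def hbf_activation(x, center, W):
    """Hyper Basis Function unit: a Gaussian-type neuron whose receptive field
    is shaped by the Mahalanobis-like distance d^2 = (x - c)^T W^T W (x - c).
    A non-identity W scales (and can rotate) each input dimension differently,
    which is the generalization property the abstract refers to."""
    diff = x - center
    d2 = diff @ (W.T @ W) @ diff
    return np.exp(-d2)

# With W = identity this reduces to an ordinary unit-width Gaussian RBF:
x = np.array([1.0, 2.0])
c = np.array([1.0, 1.0])
act_rbf = hbf_activation(x, c, np.eye(2))          # d^2 = 1 here
# A diagonal W shrinks the second dimension's influence on the distance:
act_hbf = hbf_activation(x, c, np.diag([1.0, 0.5]))
```

In the EIF/EKF training setting, the entries of `center` and `W` (together with the output weights) form the state vector that the filter estimates sequentially from the data stream.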