
    Toward a collective intelligence recommender system for education

    The development of Information and Communication Technology (ICT) has revolutionized the world and moved us into the information age; however, accessing and handling this large amount of information causes valuable losses of time. Teachers in Higher Education in particular use the Internet as a tool to consult materials and content for the development of their subjects. The Internet offers very broad services, and it is sometimes difficult for users to find content quickly and easily. This problem grows over time, causing students to spend much of their time searching for information rather than on synthesis, analysis, and the construction of new knowledge. In this context, several questions have emerged: Is it possible to design learning activities that make information search valuable and encourage collective participation? What conditions must an ICT tool that supports an information-search process meet in order to optimize the student's time and learning? This article presents the use and application of a Recommender System (RS) designed on paradigms of Collective Intelligence (CI). The RS encourages collective learning and the authentic participation of students. The research combines a literature study with an analysis of the ICT tools that have emerged in the fields of CI and RS. Design-Based Research (DBR) was also used to compile and summarize the collective intelligence approaches and filtering techniques reported in the Higher Education literature, as well as to incrementally improve the tool. Several benefits were evidenced by the exploratory study carried out. Among them, the following stand out:
    • It improves student motivation, as it helps students discover new content of interest in an easy way.
    • It saves time in the search for and classification of teaching material of interest.
    • It fosters specialized reading and inspires competence as a means of learning.
    • It gives the teacher the ability to generate reports on trends and behaviors of their students, and a real-time assessment of the quality of learning material.
    The authors consider that ICT tools combining the CI and RS paradigms presented in this work improve the construction of student knowledge and motivate students' collective development in cyberspace. In addition, the content-filtering model used supports the design of models and strategies of collective intelligence in Higher Education.
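    The abstract does not specify which filtering technique the RS uses. As one common instance of the family it surveys, a minimal user-based collaborative filter can be sketched as follows (the students, resources, and ratings below are purely illustrative):

    ```python
    from math import sqrt

    # Hypothetical ratings: student -> {resource: rating}. Names are illustrative only.
    ratings = {
        "ana":   {"paper_a": 5, "video_b": 3, "tutorial_c": 4},
        "bruno": {"paper_a": 4, "video_b": 1, "slides_d": 5},
        "carla": {"video_b": 2, "tutorial_c": 5, "slides_d": 4},
    }

    def cosine(u, v):
        """Cosine similarity over the items two users rated in common."""
        common = set(u) & set(v)
        if not common:
            return 0.0
        num = sum(u[i] * v[i] for i in common)
        den = sqrt(sum(u[i] ** 2 for i in common)) * sqrt(sum(v[i] ** 2 for i in common))
        return num / den

    def recommend(target, ratings, k=2):
        """Score unseen resources by the similarity-weighted ratings of other users."""
        scores, weights = {}, {}
        for other, their in ratings.items():
            if other == target:
                continue
            sim = cosine(ratings[target], their)
            for item, r in their.items():
                if item not in ratings[target]:
                    scores[item] = scores.get(item, 0.0) + sim * r
                    weights[item] = weights.get(item, 0.0) + sim
        ranked = sorted(
            ((scores[i] / weights[i], i) for i in scores if weights[i] > 0),
            reverse=True,
        )
        return [item for _, item in ranked[:k]]

    print(recommend("ana", ratings))  # resources ana has not yet rated, best first
    ```

    The collective-intelligence aspect is that each student's ratings improve everyone else's recommendations, which matches the abstract's goal of turning individual searches into shared value.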

    Visual analytics: The role of design and art in the emerging field of big data

    Driven by the increasing complexity of data sets, the need for sophisticated analytics algorithms, coupled with visualization of both data and information, is growing exponentially in every discipline and industry. Artists, designers, and visual thinkers have an important role to play in the presentation and interpretation of data. The Visual Analytics Lab (VAL) at OCAD University is a preeminent research lab for innovation and training in information and scientific visualization and visual analytics. Along with the lab's perspective on the field, two brief case studies are provided: one for health care and the second for media navigation and analysis.

    Internet of things

    Manual of Digital Earth / Editors: Huadong Guo, Michael F. Goodchild, Alessandro Annoni .- Springer, 2020 .- ISBN: 978-981-32-9915-3. Digital Earth was born with the aim of replicating the real world within the digital world. Many efforts have been made to observe and sense the Earth, both from space (remote sensing) and by using in situ sensors. Focusing on the latter, advances in Digital Earth have established vital bridges to exploit these sensors and their networks by taking location as a key element. The current era of connectivity envisions that everything is connected to everything. The concept of the Internet of Things (IoT) emerged as a holistic proposal to enable an ecosystem of varied, heterogeneous networked objects and devices to speak to and interact with each other. To make the IoT ecosystem a reality, it is necessary to understand the electronic components, communication protocols, real-time analysis techniques, and the location of the objects and devices. The IoT ecosystem and the Digital Earth (DE) jointly form interrelated infrastructures for addressing today’s pressing issues and complex challenges. In this chapter, we explore the synergies and frictions in establishing an efficient and permanent collaboration between the two infrastructures, in order to adequately address multidisciplinary and increasingly complex real-world problems. Although there are still some pending issues, the identified synergies generate optimism for a true collaboration between the Internet of Things and the Digital Earth.
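    The bridge the chapter describes, exploiting in-situ sensors by taking location as the key element, can be illustrated with a minimal sketch: location-tagged observations from heterogeneous IoT devices are indexed into spatial grid cells so a Digital Earth layer can query them by place. Sensor IDs, coordinates, and the grid scheme are illustrative, not from the chapter:

    ```python
    from dataclasses import dataclass
    from collections import defaultdict

    @dataclass
    class Observation:
        """A single in-situ sensor reading, keyed by location."""
        sensor_id: str
        lat: float
        lon: float
        value: float

    def grid_index(obs_list, cell_deg=1.0):
        """Group observations into lat/lon grid cells so 'everything sensed here'
        can be queried regardless of which IoT network produced the reading."""
        cells = defaultdict(list)
        for o in obs_list:
            key = (int(o.lat // cell_deg), int(o.lon // cell_deg))
            cells[key].append(o)
        return cells

    readings = [
        Observation("t-01", 41.38, 2.17, 21.5),  # illustrative: Barcelona area
        Observation("t-02", 41.40, 2.15, 22.1),
        Observation("h-09", 48.85, 2.35, 55.0),  # illustrative: Paris area
    ]
    index = grid_index(readings)
    print({cell: len(obs) for cell, obs in index.items()})
    ```

    Real deployments would use a standardized model such as OGC SensorThings rather than an ad-hoc dataclass, but the principle is the same: location is the join key between the two infrastructures.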

    Analysis of Eight Data Mining Algorithms for Smarter Internet of Things (IoT)

    Internet of Things (IoT) is set to revolutionize all aspects of our lives. The number of objects connected to IoT is expected to reach 50 billion by 2020, giving rise to enormous amounts of valuable data. The data collected from IoT devices will be used to understand and control the complex environments around us, enabling better decision making, greater automation, higher efficiency, productivity, accuracy, and wealth generation. Data mining and other artificial intelligence methods will play a critical role in creating smarter IoTs, albeit with many challenges. In this paper, we examine the applicability of eight well-known data mining algorithms for IoT data. These include, among others, deep learning artificial neural networks (DLANNs), which build a feed-forward multi-layer artificial neural network (ANN) for modelling high-level data abstractions. Our preliminary results on three real IoT datasets show that C4.5 and C5.0 have better accuracy, are memory efficient, and have relatively higher processing speeds. ANNs and DLANNs can provide highly accurate results but are computationally expensive.
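    The split criterion at the core of C4.5/C5.0-style tree learners, information gain, can be sketched on a toy IoT-style dataset (the readings and labels below are illustrative, not the paper's data):

    ```python
    from math import log2
    from collections import Counter

    # Toy IoT readings: (temperature_band, motion, label) — illustrative only.
    samples = [
        ("high", "yes", "alert"), ("high", "no", "alert"),
        ("low", "yes", "ok"), ("low", "no", "ok"),
        ("mid", "yes", "alert"), ("mid", "no", "ok"),
    ]

    def entropy(rows):
        """Shannon entropy of the class labels in a set of rows."""
        counts = Counter(label for *_, label in rows)
        total = len(rows)
        return -sum((c / total) * log2(c / total) for c in counts.values())

    def info_gain(rows, attr_index):
        """Information gain of splitting on one attribute — the quantity a
        C4.5-style learner maximizes when choosing the next test node."""
        total = len(rows)
        by_value = {}
        for row in rows:
            by_value.setdefault(row[attr_index], []).append(row)
        remainder = sum(len(sub) / total * entropy(sub) for sub in by_value.values())
        return entropy(rows) - remainder

    # Temperature separates the classes better than motion, so a tree learner
    # would test it first.
    print(info_gain(samples, 0), info_gain(samples, 1))
    ```

    C4.5 itself refines this with gain ratio and pruning, and the paper's accuracy/memory comparison rests on the full implementations, but this split criterion is why such trees are cheap to build and evaluate on resource-constrained IoT data.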

    CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap

    After addressing the state of the art during the first year of Chorus and establishing the existing landscape in multimedia search engines, we identified and analyzed gaps within the European research effort during our second year. In this period we focused on three directions, notably technological issues, user-centred issues and use-cases, and socio-economic and legal aspects. These were assessed through two central studies: firstly, a concerted vision of the functional breakdown of a generic multimedia search engine, and secondly, representative use-case descriptions with related discussion of the requirements for technological challenges. Both studies were carried out in cooperation and consultation with the community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as national initiative coordinators. Based on the feedback obtained, we identified two types of gaps, namely core technological gaps that involve research challenges, and “enablers”, which are not necessarily technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal challenges.

    Re-examining and re-conceptualising enterprise search and discovery capability: towards a model for the factors and generative mechanisms for search task outcomes.

    Many organizations are trying to re-create the Google experience in order to find and exploit their own corporate information. However, there is evidence that finding information in the workplace using search engine technology has remained difficult, with socio-technical elements largely neglected in the literature. Explication of the factors and generative mechanisms (ultimate causes) behind effective search task outcomes (user satisfaction, search task performance, and serendipitous encountering) may provide a first step toward making improvements. A transdisciplinary (holistic) lens was applied to Enterprise Search and Discovery capability, combining critical realism and activity theory with complexity theories, in one of the world's largest corporations. Data collection included an in-situ exploratory search experiment with 26 participants, focus groups with 53 participants, and interviews with 87 business professionals. Thousands of user feedback comments and search transactions were analysed. Transferability of the findings was assessed through interviews with eight industry informants and ten organizations from a range of industries. A wide range of informational needs were identified for search filters, including a need to be intrigued. Search term word co-occurrence algorithms facilitated serendipity to a greater extent than the existing methods deployed in the organization surveyed. No association was found between user satisfaction (or self-assessed search expertise) and search task performance; overall performance was poor, although most participants were satisfied with their performance. Eighteen factors were identified that influence search task outcomes, ranging from user and task factors, informational and technological artefacts, through to a wide range of organizational norms. Modality Theory (Cybersearch culture, Simplicity, and Loss Aversion bias) was developed to explain the study observations.
    This proposes that, at all organizational levels, there are tendencies toward reductionist (unimodal) mind-sets about search capability, leading to fixes that fail. The factors and mechanisms were identified in other industry organizations, suggesting some generalizability of the theory. This is the first socio-technical analysis of Enterprise Search and Discovery capability. The findings challenge existing orthodoxy, such as the criticality of search literacy (agency), which has been neglected in the practitioner literature in favour of structure. The resulting multifactorial causal model and strategic framework for improvement present opportunities to update existing academic models in the IR, LIS, and IS literature, such as the DeLone and McLean model for information system success. There are encouraging signs that Modality Theory may enable a reconfiguration of organizational mind-sets that could transform search task outcomes and ultimately business performance.
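    The co-occurrence idea behind the serendipity finding can be sketched simply: count how often word pairs appear together in queries or snippets, then surface the strongest neighbours of the searcher's term. The log snippets and scoring below are illustrative; the abstract does not specify the study's actual algorithm:

    ```python
    from collections import Counter
    from itertools import combinations

    # Hypothetical search-log snippets — illustrative only.
    logs = [
        "pump seal failure report",
        "pump seal replacement procedure",
        "compressor seal failure",
        "pump maintenance schedule",
    ]

    def cooccurrence(texts):
        """Count how often each word pair appears in the same snippet."""
        pairs = Counter()
        for t in texts:
            words = sorted(set(t.split()))
            pairs.update(combinations(words, 2))
        return pairs

    def related_terms(term, texts, k=3):
        """Suggest the terms that co-occur most often with the searcher's term —
        one simple route to serendipitous encountering."""
        pairs = cooccurrence(texts)
        scored = Counter()
        for (a, b), n in pairs.items():
            if a == term:
                scored[b] += n
            elif b == term:
                scored[a] += n
        return [w for w, _ in scored.most_common(k)]

    print(related_terms("seal", logs))
    ```

    A searcher who typed "seal" would be nudged toward "pump" and "failure", terms they did not ask for but that the collective query history associates with their need. Production systems would add weighting (e.g. PMI or TF-IDF) rather than raw counts.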

    A Knowledge Multidimensional Representation Model for Automatic Text Analysis and Generation: Applications for Cultural Heritage

    Knowledge is information that has been contextualized in a certain domain, where it can be used and applied. Natural Language provides a most direct way to transfer knowledge at different levels of conceptual density. The opportunity provided by the evolution of Natural Language Processing technologies is thus to make the process of knowledge transfer more fluid and universal. Indeed, unfolding domain knowledge is one way to bring to larger audiences contents that would otherwise be restricted to specialists. This has so far been done in a totally manual way, through the skills of divulgators and popular science writers. Technology now provides a way to make this transfer both less expensive and more widespread. Extracting knowledge and then generating from it suitably communicable text in natural language are the two related subtasks that need to be fulfilled in order to attain the general goal. To this aim, two fields from information technology have achieved the needed maturity and can therefore be effectively combined. On the one hand, Information Extraction and Retrieval (IER) can extract knowledge from texts and map it into a neutral, abstract form, hence liberating it from the stylistic constraints in which it originated. From there, Natural Language Generation can take charge, regenerating the extracted knowledge automatically, or semi-automatically, into texts targeting new communities. This doctoral thesis contributes to making this combination substantial through the definition and implementation of a novel multidimensional model for the representation of conceptual knowledge, and of a workflow that can produce strongly customized textual descriptions. By exploiting techniques for the generation of paraphrases and by profiling target users, applications, and domains, a target-driven approach is proposed to automatically generate multiple texts from the same information core.
    An extended case study is described to demonstrate the effectiveness of the proposed model and approach in the Cultural Heritage application domain, so as to compare and position this contribution within the current state of the art and to outline future directions.
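    The target-driven idea, one neutral information core rendered as different texts for different audiences, can be illustrated with a toy template-based sketch. The facts, audience profiles, and templates below are illustrative stand-ins, not the thesis's actual multidimensional model:

    ```python
    # One information core: (subject, predicate, object) triples — illustrative.
    facts = [
        ("Riace Bronzes", "created_in", "5th century BC"),
        ("Riace Bronzes", "housed_at", "Museo Nazionale della Magna Grecia"),
    ]

    # Audience-specific surface realizations of the same predicates.
    templates = {
        "expert": {
            "created_in": "{s} is dated to the {o}.",
            "housed_at": "{s} is held in the collection of the {o}.",
        },
        "general": {
            "created_in": "{s} was made in the {o}.",
            "housed_at": "You can see the {s} at the {o}.",
        },
    }

    def generate(facts, audience):
        """Render the same fact core as audience-specific text."""
        tpl = templates[audience]
        return " ".join(tpl[p].format(s=s, o=o) for s, p, o in facts)

    print(generate(facts, "expert"))
    print(generate(facts, "general"))
    ```

    The point of the sketch is the separation the abstract describes: the triples are the stylistically neutral extracted knowledge, and only the realization layer changes per target community. The thesis's workflow adds paraphrase generation and richer user/domain profiling on top of this basic split.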