
    Toward a collective intelligence recommender system for education

    The development of Information and Communication Technology (ICT) has revolutionized the world and moved us into the information age; however, accessing and handling this large amount of information causes valuable losses of time. Teachers in Higher Education in particular use the Internet as a tool to consult materials and content for the development of their subjects. The Internet offers very broad services, and users sometimes find it difficult to locate content easily and quickly. This problem keeps growing, causing students to spend much of their time searching for information rather than on synthesis, analysis and the construction of new knowledge. In this context, several questions have emerged: Is it possible to design learning activities that allow us to value the information search and to encourage collective participation? What conditions must an ICT tool that supports an information-search process meet to optimize the student's time and learning? This article presents the use and application of a Recommender System (RS) designed on paradigms of Collective Intelligence (CI). The RS encourages collective learning and the authentic participation of students. The research combines a literature study with an analysis of the ICT tools that have emerged in the fields of CI and RS. Design-Based Research (DBR) was also used to compile and summarize the collective intelligence approaches and filtering techniques reported in the Higher Education literature, as well as to incrementally improve the tool. The exploratory study evidenced several benefits, among which the following stand out:
    • It improves student motivation, helping students discover new content of interest in an easy way.
    • It saves time in the search and classification of teaching material of interest.
    • It fosters specialized reading and inspires competence as a means of learning.
    • It gives the teacher the ability to generate reports on trends and behaviours of their students, and a real-time assessment of the quality of learning material.
    The authors consider that ICT tools combining the CI and RS paradigms presented in this work improve the construction of student knowledge and motivate its collective development in cyberspace; in addition, the content-filtering model used supports the design of models and strategies of collective intelligence in Higher Education.
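    The abstract does not detail the filtering model; as a minimal, purely illustrative sketch of the kind of technique such an RS can build on, the following user-based collaborative filter recommends unseen learning resources from peer ratings (all data and names here are hypothetical):

        # Minimal user-based collaborative filtering sketch (illustrative only;
        # the filtering model actually used by the paper is not specified here).
        import numpy as np

        # Hypothetical ratings matrix: rows = students, columns = learning
        # resources, 0 = not yet rated.
        ratings = np.array([
            [5, 3, 0, 1],
            [4, 0, 0, 1],
            [1, 1, 0, 5],
            [0, 0, 5, 4],
        ], dtype=float)

        def cosine_sim(a, b):
            # Similarity over co-rated resources only.
            mask = (a > 0) & (b > 0)
            if not mask.any():
                return 0.0
            a, b = a[mask], b[mask]
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        def recommend(user, k=2):
            # Score unrated resources by similarity-weighted peer ratings.
            n_users, n_items = ratings.shape
            sims = np.array([cosine_sim(ratings[user], ratings[v]) for v in range(n_users)])
            sims[user] = 0.0  # exclude the student's own row
            scores = {}
            for item in range(n_items):
                if ratings[user, item] == 0:
                    raters = [v for v in range(n_users) if ratings[v, item] > 0]
                    den = sum(sims[v] for v in raters)
                    scores[item] = sum(sims[v] * ratings[v, item] for v in raters) / den if den else 0.0
            return sorted(scores, key=scores.get, reverse=True)[:k]

        print(recommend(user=1))  # resources student 1 has not rated, best first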

    ALOJA: A framework for benchmarking and predictive analytics in Hadoop deployments

    This article presents the ALOJA project and its analytics tools, which leverage machine learning to interpret Big Data benchmark performance data and tuning. ALOJA is part of a long-term collaboration between BSC and Microsoft to automate the characterization of cost-effectiveness of Big Data deployments, currently focusing on Hadoop. Hadoop presents a complex run-time environment, where costs and performance depend on a large number of configuration choices. The ALOJA project has created an open, vendor-neutral repository featuring over 40,000 Hadoop job executions and their performance details. The repository is accompanied by a test-bed and tools to deploy and evaluate the cost-effectiveness of different hardware configurations, parameters and Cloud services. Despite early success within ALOJA, a comprehensive study requires automation of modeling procedures to allow analysis of large and resource-constrained search spaces. The predictive analytics extension, ALOJA-ML, provides an automated system for knowledge discovery by modeling environments from observed executions. The resulting models can forecast execution behaviours, predicting execution times for new configurations and hardware choices; this also enables model-based anomaly detection and efficient benchmark guidance by prioritizing executions. In addition, the community can benefit from the ALOJA data-sets and framework to improve the design and deployment of Big Data applications. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 639595). This work is partially supported by the Ministry of Economy of Spain under contracts TIN2012-34557 and 2014SGR1051.
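    ALOJA-ML's concrete models are not given in the abstract; under that caveat, a minimal sketch of the general approach — fitting a regression model that predicts execution time from configuration features and using it to prioritize benchmark runs — could look as follows (feature names and data are invented):

        # Illustrative sketch: learn execution time from configuration features
        # and rank candidate configurations; the actual ALOJA-ML models and
        # feature set are assumptions here.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Hypothetical encoded configs: [mappers, io_buffer_mb, compression, disks]
        X = rng.integers(1, 32, size=(500, 4)).astype(float)
        # Synthetic execution times, only so the demo is runnable end to end.
        y = 100 / X[:, 0] + 5 * X[:, 1] / (1 + X[:, 3]) + rng.normal(0, 1, 500)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X_train, y_train)

        # Forecast times for unseen configurations and benchmark the most
        # promising ones first (model-guided prioritization).
        candidates = rng.integers(1, 32, size=(10, 4)).astype(float)
        predicted = model.predict(candidates)
        print(candidates[np.argsort(predicted)[:3]])  # three configs predicted fastest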

    An intelligent information forwarder for healthcare big data systems with distributed wearable sensors

    © 2016 IEEE. An increasing number of the elderly population wish to live an independent lifestyle rather than rely on intrusive care programmes. A big data solution is presented using wearable sensors capable of carrying out continuous monitoring of the elderly, alerting the relevant caregivers when necessary and forwarding pertinent information to a big data system for analysis. A challenge for such a solution is the development of context-awareness from multidimensional, dynamic and nonlinear sensor readings that have a weak correlation with observable human behaviours and health conditions. To address this challenge, a wearable sensor system with an intelligent data forwarder is discussed in this paper. The forwarder adopts a Hidden Markov Model for human behaviour recognition. Locality-sensitive hashing is proposed as an efficient mechanism to learn sensor patterns. A prototype solution is implemented to monitor the health conditions of dispersed users. It is shown that the intelligent forwarders can provide the remote sensors with context-awareness: they transmit only important information to the big data server for analytics when certain behaviours happen, avoiding overwhelming communication and data storage. The system functions unobtrusively, giving users peace of mind in the knowledge that their safety is being monitored and analysed.
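    The abstract names a Hidden Markov Model for behaviour recognition without giving its parameters; the toy sketch below, with invented states, observations and probabilities, shows how Viterbi decoding would map discretized sensor readings to hidden behaviours:

        # Toy HMM behaviour recognition via Viterbi decoding (all states and
        # probabilities below are invented; the paper's model is not specified).
        import numpy as np

        states = ["resting", "walking", "falling"]  # hidden behaviours
        # Observations are discretized motion levels: 0=low, 1=medium, 2=high.

        start_p = np.log([0.6, 0.35, 0.05])
        trans_p = np.log([[0.80, 0.15, 0.05],   # resting -> ...
                          [0.20, 0.70, 0.10],   # walking -> ...
                          [0.50, 0.30, 0.20]])  # falling -> ...
        emit_p  = np.log([[0.85, 0.10, 0.05],   # resting emits mostly low motion
                          [0.10, 0.70, 0.20],
                          [0.05, 0.25, 0.70]])

        def viterbi(observations):
            # Most likely hidden state sequence for a list of observation indices.
            v = start_p + emit_p[:, observations[0]]
            back = []
            for o in observations[1:]:
                scores = v[:, None] + trans_p     # scores[i, j]: from state i to j
                back.append(scores.argmax(axis=0))
                v = scores.max(axis=0) + emit_p[:, o]
            path = [int(v.argmax())]
            for ptr in reversed(back):            # follow back-pointers
                path.append(int(ptr[path[-1]]))
            return [states[s] for s in reversed(path)]

        # Hypothetical reading stream: low, low, high, high, medium motion.
        print(viterbi([0, 0, 2, 2, 1]))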

    Big Data Risk Assessment: the 21st Century approach to safety science

    Safety Science has developed over time, with notable models in the early 20th Century such as Heinrich's iceberg model and the Swiss cheese model. Common techniques such as fault tree and event tree analyses, HAZOP analysis and bow-tie construction are widely used within industry. These techniques are based on the concept that failures of a system can be caused by deviations or individual faults within a system, by combinations of latent failures, or even where each part of a complex system is operating within normal bounds but a combined effect creates a hazardous situation. In this era of Big Data, systems are becoming increasingly complex, producing a quantity of safety-related data so large that it cannot be meaningfully analysed by humans to make decisions or to uncover complex trends that may indicate the presence of hazards. More subtle and automated techniques for mining these data are required to provide a better understanding of our systems and the environment within which they operate, and insights into hazards that might not otherwise be identified. Big Data Risk Analysis (BDRA) is a suite of techniques being researched to identify how non-traditional techniques applied to big data sources can predict safety risk. This paper describes early trials of BDRA conducted on railway signal information and text-based reports of railway safety near misses, and the ongoing research into combining various data sources to uncover obscured trends that cannot be identified by considering each source individually. The paper also discusses how visual analytics may be a key tool in analysing Big Data to support knowledge elicitation and decision-making, as well as providing information in a form that can be readily interpreted by a variety of audiences.
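    The abstract mentions mining text-based near-miss reports but not the specific technique; one common and purely illustrative approach is TF-IDF vectorization plus clustering to surface recurring themes (the report snippets below are invented):

        # Illustrative only: cluster free-text near-miss reports to surface
        # recurring themes. The BDRA paper's actual techniques are not given here.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        reports = [  # hypothetical near-miss report snippets
            "signal passed at danger approaching junction",
            "track worker near miss during possession handback",
            "signal sighting obscured by vegetation",
            "lookout warning late, workers clear of track in time",
        ]

        vec = TfidfVectorizer(stop_words="english")
        X = vec.fit_transform(reports)

        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
        terms = vec.get_feature_names_out()
        for c in range(2):  # top terms characterizing each cluster of reports
            centre = km.cluster_centers_[c]
            top = centre.argsort()[::-1][:3]
            print(f"cluster {c}:", [terms[i] for i in top])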

    Dialogue as Data in Learning Analytics for Productive Educational Dialogue

    This paper provides a novel, conceptually driven stance on the contemporary analytic challenges faced in the treatment of dialogue as a form of data across on- and offline sites of learning. In prior research, preliminary steps have been taken to detect occurrences of productive dialogue using automated analysis techniques. Such advances have the potential to foster effective dialogue using learning analytic techniques that scaffold, give feedback on, and provide pedagogic contexts promoting such dialogue. However, the translation of much prior learning-science research to online contexts is complex, requiring the operationalization of constructs theorized in different contexts (often face-to-face) and based on different datasets and structures (often spoken dialogue). In this paper, we explore what could constitute effective analysis of productive online dialogues, arguing that it requires consideration of three key facets of the dialogue: features indicative of productive dialogue; the unit of segmentation; and the interplay of features and segmentation with the temporal underpinning of learning contexts. The paper thus foregrounds key considerations regarding the analysis of dialogue data in emerging learning analytics environments, for both learning-science and computationally oriented researchers.
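    As a toy illustration of the three facets named above — features indicative of productive dialogue, a unit of segmentation, and temporal context — the sketch below segments a transcript into timestamped turns and tags naive keyword cues (cue lists and labels are invented, not taken from the paper):

        # Toy illustration: segment dialogue into units (here, turns), tag
        # features of productive dialogue (here, naive keyword cues), and keep
        # the temporal ordering. Cues and labels are invented for the demo.
        from dataclasses import dataclass

        CUES = {
            "elaboration": ("because", "for example", "so that"),
            "challenge":   ("but", "disagree", "why"),
            "uptake":      ("you said", "building on"),
        }

        @dataclass
        class Turn:
            t: float        # timestamp, preserving temporal context
            speaker: str
            text: str

        def tag(turns):
            # Yield (timestamp, speaker, matched features) per segmentation unit.
            for turn in turns:
                lower = turn.text.lower()
                feats = [f for f, cues in CUES.items() if any(c in lower for c in cues)]
                yield turn.t, turn.speaker, feats

        transcript = [
            Turn(0.0, "A", "I think erosion explains it because the banks are soft."),
            Turn(6.2, "B", "But why would that only happen downstream?"),
            Turn(11.5, "A", "Building on that, maybe flow speed matters too."),
        ]
        for t, speaker, feats in tag(transcript):
            print(f"{t:5.1f}s {speaker}: {feats}")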

    Counting Causal Paths in Big Time Series Data on Networks

    Graph or network representations are an important foundation for data mining and machine learning tasks in relational data. Many tools of network analysis, like centrality measures, information ranking, or cluster detection, rest on the assumption that links capture direct influence and that paths represent possible indirect influence. This assumption is invalidated in time-stamped network data capturing, e.g., dynamic social networks, biological sequences or financial transactions. In such data, for two time-stamped links (A,B) and (B,C), the chronological ordering and timing determine whether a causal path from node A via B to C exists. A number of works have shown that, for this reason, network analysis cannot be directly applied to time-stamped network data. Existing methods to address this issue require statistics on causal paths, which is computationally challenging for big data sets. Addressing this problem, we develop an efficient algorithm to count causal paths in time-stamped network data. Applying it to empirical data, we show that our method is more efficient than a baseline method implemented in an open-source data analytics package. Our method works efficiently for different values of the maximum time difference between consecutive links of a causal path and supports streaming scenarios. With it, we close a gap that hinders an efficient analysis of big time series data on complex networks.
    Comment: 10 pages, 2 figures
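    The paper's own algorithm is not reproduced in the abstract; as a minimal sketch of the underlying idea, the following counts time-respecting (causal) paths of length two in time-stamped edge data, with a maximum waiting time delta between consecutive links:

        # Minimal sketch: count causal paths of length two in time-stamped edge
        # data. Illustrative only; this is not the paper's algorithm.
        from bisect import bisect_left
        from collections import defaultdict

        def count_causal_paths(edges, delta):
            # edges: iterable of (source, target, timestamp).
            # Counts causal paths a -> b -> c with 0 < t2 - t1 <= delta.
            in_times = defaultdict(list)  # arrival times per node
            for u, v, t in edges:
                in_times[v].append(t)
            for ts in in_times.values():
                ts.sort()

            count = 0
            for b, c, t2 in edges:
                ts = in_times[b]
                # Arrivals at b strictly before t2 and no older than t2 - delta
                # each extend into a causal path ending with the link (b, c).
                count += bisect_left(ts, t2) - bisect_left(ts, t2 - delta)
            return count

        # (A,B,1) then (B,C,2) is causal for delta >= 1; (C,A,5) extends nothing.
        print(count_causal_paths([("A", "B", 1), ("B", "C", 2), ("C", "A", 5)], delta=2))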