    Social Networks as Learning Environments for Higher Education

    Learning is a social activity: a student does not learn only from the teacher, the textbook, or the classroom, but also from many other agents, including the media, peers, and society in general. Since the explosion of the Internet, information has been within everyone's reach, and this is where the main opportunity for new technologies applied to education lies: taking advantage of recent socialization trends not merely to report on daily practices, but as a tool for exploring different branches of educational research. One can foresee the future of higher education as an open, collaborative social learning environment, where people construct knowledge in interaction with others in a comprehensive manner. The mobility and ubiquity provided by mobile devices enable connection from anywhere and at any time. In modern educational settings, mobile devices can be expected to extend the classroom into digital environments, so that students and teachers can build the teaching-learning process collectively. These partial results derive from the research project approved by CONADI at the Universidad Cooperativa de Colombia, "Social Networks: A teaching strategy in learning environments in higher education."

    Efficient approximation of probability distributions with k-order decomposable models

    During the last decades, several learning algorithms have been proposed to learn probability distributions based on decomposable models. Some of these algorithms can be used to search for a maximum likelihood decomposable model with a given maximum clique size, k. Unfortunately, the problem of learning a maximum likelihood decomposable model given a maximum clique size is NP-hard for k > 2. In this work, we propose the fractal tree family of algorithms, which approximates this problem with a worst-case computational complexity of O(k^2 · n^2 · N), where n is the number of random variables involved and N is the size of the training set. The fractal tree algorithms construct a sequence of maximal i-order decomposable graphs, for i = 2, ..., k, in k − 1 steps. At each step, the algorithms follow a divide-and-conquer strategy that decomposes the problem into a set of separator problems, each of which is solved efficiently using the generalized Chow-Liu algorithm. Fractal trees can be considered a natural extension of the Chow-Liu algorithm from k = 2 to arbitrary values of k, and they have shown competitive behaviour on the maximum likelihood problem. Due to this competitive behaviour, their low computational complexity, and their modularity, which allows different parallelization strategies, the proposed procedures are especially well suited to modelling high-dimensional domains. Funding: Saiotek and IT609-13 programs (Basque Government); TIN2013-41272-P (Spanish Ministry of Science and Innovation); COMBIOMED network in computational bio-medicine (Carlos III Health Institute).
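
    For context, the classic Chow-Liu step that the fractal tree algorithms generalize solves the k = 2 case exactly: compute pairwise mutual information and take a maximum-weight spanning tree. The sketch below is a generic illustration of that base case, not the authors' implementation, assuming discrete data stored as an integer NumPy array.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mutual_information(x, y):
    """Empirical mutual information between two discrete variables."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def chow_liu_tree(data):
    """Edges of a maximum-likelihood tree (k = 2 decomposable model).

    data: (N, n) integer array, one column per random variable.
    """
    n = data.shape[1]
    mi = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # Tiny epsilon keeps zero-MI pairs as (cheap) edges,
            # so the resulting tree stays connected.
            mi[i, j] = mutual_information(data[:, i], data[:, j]) + 1e-12
    # Maximum spanning tree == minimum spanning tree on negated weights.
    mst = minimum_spanning_tree(-mi)
    return list(zip(*mst.nonzero()))
```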

    A review on distance based time series classification

    Time series classification is a growing research topic due to the vast amount of time series data being created across a wide variety of fields. The particularities of the data make it a challenging task, and different approaches have been taken, including the distance-based approach. 1-NN has been widely used within distance-based time series classification due to its simplicity and still good performance. However, its supremacy may be attributed to the ability to use time-series-specific distances within the classification process, rather than to the classifier itself. With the aim of exploiting these distances within more complex classifiers, new approaches have arisen in the past few years that are competitive with or outperform the 1-NN based approaches. In some cases, these new methods use the distance measure to transform the series into feature vectors, bridging the gap between time series and traditional classifiers. In other cases, the distances are employed to obtain a time series kernel and enable the use of kernel methods for time series classification. One of the main challenges is that a kernel function must be positive semi-definite, a matter that is also addressed within this review. The presented review includes a taxonomy of all those methods that aim to classify time series using a distance-based approach, as well as a discussion of the strengths and weaknesses of each method. Funding: TIN2016-78365-R.
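
    To make the baseline concrete, here is a minimal sketch of the 1-NN classifier with dynamic time warping (DTW), the standard distance-based reference the review discusses; it is a generic illustration, not code from the review.

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # Extend the cheapest of the three admissible warping paths.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

def knn1_classify(train_series, train_labels, query):
    """Assign the label of the training series closest in DTW distance."""
    dists = [dtw(s, query) for s in train_series]
    return train_labels[int(np.argmin(dists))]
```

    Note that a naive kernel built from such a distance, e.g. exp(-dtw(a, b)), is not guaranteed to be positive semi-definite, which is exactly the issue the review addresses for kernel methods.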

    Characterising the rankings produced by combinatorial optimisation problems and finding their intersections

    The aim of this paper is to introduce the concept of intersection between combinatorial optimisation problems. We take into account that most algorithms, in their machinery, do not consider the exact objective function values of the solutions, but only comparisons between them. In this sense, if the solutions of an instance of a combinatorial optimisation problem are sorted by their objective function values, we can see the instance as a (partial) ranking of the solutions of the search space. Working with specific problems, particularly the linear ordering problem and the symmetric and asymmetric traveling salesman problems, we show that they cannot generate the whole set of (partial) rankings of the solutions of the search space, but just a subset. First, we characterise the set of (partial) rankings each problem can generate. Secondly, we study the intersections between these problems: those rankings which can be generated by both the linear ordering problem and the symmetric/asymmetric traveling salesman problem, respectively. Finding large intersections between problems can be useful for transferring heuristics from one problem to another, or for defining heuristics that are useful for more than one problem.
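
    The instance-as-ranking view can be made concrete on a toy example by exhaustively evaluating a small instance and sorting its solutions. The following sketch does this for the linear ordering problem (LOP) under the usual objective (sum of matrix entries above the diagonal after reordering); the random instance is purely illustrative.

```python
import numpy as np
from itertools import permutations

def lop_value(B, perm):
    """LOP objective: sum of the entries of B that end up above the
    diagonal when rows/columns are reordered according to perm."""
    return sum(B[perm[i], perm[j]]
               for i in range(len(perm))
               for j in range(i + 1, len(perm)))

def instance_ranking(B):
    """Sort all solutions by objective value (ties give a partial ranking)."""
    sols = list(permutations(range(B.shape[0])))
    return sorted(sols, key=lambda p: lop_value(B, p), reverse=True)

rng = np.random.default_rng(0)
B = rng.integers(0, 10, size=(4, 4))   # a toy 4x4 instance
for perm in instance_ranking(B)[:5]:   # top of the induced ranking
    print(perm, lop_value(B, perm))
```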

    Anatomy of the attraction basins: Breaking with the intuition

    Solving combinatorial optimization problems efficiently requires the development of algorithms that consider the specific properties of the problems. In this sense, local search algorithms are designed over a neighborhood structure that partially accounts for these properties. Given a neighborhood, the space is usually interpreted as a natural landscape, with valleys and mountains. Under this perception, it is commonly believed that, when maximizing, the solutions located on the slopes of the same mountain belong to the same attraction basin, with the peaks of the mountains being the local optima. Unfortunately, this is a widespread but erroneous visualization of a combinatorial landscape. Thus, our aim is to clarify this aspect by providing a detailed analysis of, first, the plateaus in which local optima are involved, and second, the properties that define the topology of the attraction basins, yielding a reliable visualization of the landscapes. Some of the features explored in this article have never been examined before, and new findings about the structure of the attraction basins are presented. The study focuses on instances of permutation-based combinatorial optimization problems under the 2-exchange and insert neighborhoods. As a consequence of this work, we break away from the extended belief about the anatomy of attraction basins.
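
    Operationally, the attraction basin of a local optimum is the set of solutions from which a deterministic local search reaches it. A minimal sketch of how basins can be computed exhaustively on a small permutation space, assuming a best-improvement hill climber under the 2-exchange neighborhood (a generic illustration, not the article's experimental code):

```python
from itertools import permutations, combinations

def two_exchange_neighbors(perm):
    """All permutations reachable by swapping two positions."""
    for i, j in combinations(range(len(perm)), 2):
        p = list(perm)
        p[i], p[j] = p[j], p[i]
        yield tuple(p)

def attraction_basins(n, f):
    """Map every solution of size n to the local optimum reached by
    best-improvement hill climbing (maximization of f)."""
    basin = {}
    for start in permutations(range(n)):
        current = start
        while True:
            best = max(two_exchange_neighbors(current), key=f)
            if f(best) <= f(current):   # no improving neighbor: local optimum
                break
            current = best
        basin[start] = current
    return basin
```

    Grouping the keys of the returned dictionary by value exposes each basin; on real instances this is where the plateaus and the counter-intuitive basin shapes analyzed in the article show up.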

    A cheap feature selection approach for the K -means algorithm

    The increase in the number of features that need to be analyzed in a wide variety of areas, such as genome sequencing, computer vision or sensor networks, represents a challenge for the K-means algorithm. In this regard, different dimensionality reduction approaches for the K-means algorithm have been designed recently, leading to algorithms that have proved to generate competitive clusterings. Unfortunately, most of these techniques tend to have fairly high computational costs and/or might not be easy to parallelize. In this work, we propose a fully-parallelizable feature selection technique intended for the K-means algorithm. The proposal is based on a novel feature relevance measure that is closely related to the K-means error of a given clustering. Given a disjoint partition of the features, the technique consists of obtaining a clustering for each subset of features and selecting the m features with the highest relevance measure. The computational cost of this approach is just O(m · max{n · K, log m}) per subset of features. We additionally provide a theoretical analysis of the quality of the solution obtained via our proposal, and empirically analyze its performance with respect to well-known feature selection and feature extraction techniques. This analysis shows that our proposal consistently obtains results with lower K-means error than all the considered feature selection techniques (Laplacian scores, maximum variance, multi-cluster feature selection and random selection), while requiring similar or lower computational times. Moreover, when compared to feature extraction techniques such as Random Projections, the proposed approach also shows a noticeable improvement in both error and computational time. Funding: BERC 2014-201.
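
    The paper's relevance measure is tied to the K-means error and is not reproduced here; the sketch below only illustrates the overall pipeline (partition the features, cluster each subset independently, score every feature, keep the top m), with a deliberately simple placeholder score standing in for the actual measure. Every name and the scoring rule are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_features(X, m, n_subsets, K, seed=0):
    """Cluster each disjoint feature subset and keep the m highest-scoring
    features. The per-feature score used here (between-cluster variance of
    the feature's cluster means) is a hypothetical placeholder, NOT the
    paper's relevance measure."""
    rng = np.random.default_rng(seed)
    features = rng.permutation(X.shape[1])          # disjoint partition
    scores = np.empty(X.shape[1])
    for subset in np.array_split(features, n_subsets):
        # Each subset can be processed on a separate worker in parallel.
        labels = KMeans(n_clusters=K, n_init=10).fit_predict(X[:, subset])
        for f in subset:
            centers = [X[labels == c, f].mean() for c in range(K)]
            scores[f] = np.var(centers)             # placeholder relevance
    return np.sort(np.argsort(scores)[-m:])         # indices of kept features
```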

    EDA++: Estimation of Distribution Algorithms with Feasibility Conserving Mechanisms for Constrained Continuous Optimization

    Handling non-linear constraints in continuous optimization is challenging, and finding a feasible solution is usually a difficult task. In the past few decades, various techniques have been developed to deal with linear and non-linear constraints. However, reaching feasible solutions has remained a challenge for most of these methods. In this paper, we adopt the framework of Estimation of Distribution Algorithms (EDAs) and propose a new algorithm (EDA++) equipped with mechanisms to deal with non-linear constraints. These mechanisms are associated with different stages of the EDA, including seeding, learning and mapping. It is shown that, besides increasing the quality of the solutions in terms of objective values, the feasibility of the final solutions is guaranteed if an initial population of feasible solutions is seeded to the algorithm. The EDA with the proposed mechanisms is applied to two suites of benchmark problems for constrained continuous optimization, and its performance is compared with state-of-the-art algorithms and constraint handling methods. The conducted experiments confirm the speed, robustness and efficiency of the proposed algorithm in tackling various problems with linear and non-linear constraints. Funding: La Caixa Foundation.
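
    As background, a bare-bones continuous EDA loop looks as follows: select the fittest individuals, fit a Gaussian to them, sample offspring, repeat. The feasible seeding mentioned in the abstract appears here as a feasible initial population; EDA++'s actual learning and mapping mechanisms are paper-specific and only stood in for by simple rejection of infeasible samples. All names are illustrative assumptions.

```python
import numpy as np

def eda_sketch(objective, is_feasible, seed_population,
               n_iter=100, top_frac=0.3, seed=0):
    """Minimal Gaussian EDA (minimization). seed_population is assumed
    feasible; infeasible offspring are simply rejected, a crude stand-in
    for EDA++'s feasibility-conserving learning/mapping mechanisms."""
    rng = np.random.default_rng(seed)
    pop = np.asarray(seed_population, dtype=float)
    n, d = pop.shape
    for _ in range(n_iter):
        fitness = np.apply_along_axis(objective, 1, pop)
        elite = pop[np.argsort(fitness)[: max(2, int(top_frac * n))]]
        mu = elite.mean(axis=0)
        cov = np.cov(elite, rowvar=False) + 1e-9 * np.eye(d)  # regularized
        samples = rng.multivariate_normal(mu, cov, size=4 * n)
        feas = samples[[is_feasible(s) for s in samples]]
        # Elitism plus feasible offspring: feasibility is preserved
        # whenever the initial population is feasible.
        pop = np.vstack([elite, feas])[:n]
    best = np.argmin(np.apply_along_axis(objective, 1, pop))
    return pop[best]
```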

    Exploring Gaps in DeepFool in Search of More Effective Adversarial Perturbations

    Adversarial examples are inputs subtly perturbed to produce a wrong prediction in machine learning models, while remaining perceptually similar to the original input. To find adversarial examples, some attack strategies rely on linear approximations of different properties of the models. This opens a number of questions related to the accuracy of such approximations. In this paper we focus on DeepFool, a state-of-the-art attack algorithm, which is based on efficiently approximating the decision space of the target classifier to find the minimal perturbation needed to fool the model. The objective of this paper is to analyze the feasibility of finding inaccuracies in the linear approximation of DeepFool, with the aim of studying whether they can be used to increase the effectiveness of the attack. We introduce two strategies to efficiently explore gaps in the approximation of the decision boundaries, and evaluate our approach on a speech command classification task. Funding: IT1244-19; PRE_2019_1_0128; TIN2016-78365-R; PID2019-104966GB-I00; FPU19/0323.
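
    DeepFool's core step linearizes the decision boundaries around the current point and moves just past the nearest one. A minimal PyTorch sketch of the standard multiclass algorithm is given below for context; the paper's gap-exploration strategies are not reproduced here.

```python
import torch

def deepfool(model, x, n_classes, max_iter=50, overshoot=0.02):
    """Minimal multiclass DeepFool: repeatedly linearize the decision
    boundaries around x_adv and apply the smallest perturbation that
    crosses the nearest (linearized) boundary."""
    x_adv = x.clone().detach()
    orig = model(x_adv.unsqueeze(0)).argmax().item()
    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        logits = model(x_adv.unsqueeze(0))[0]
        if logits.argmax().item() != orig:     # already fooled
            break
        grads = torch.stack([
            torch.autograd.grad(logits[k], x_adv, retain_graph=True)[0]
            for k in range(n_classes)])
        w = grads - grads[orig]                # linearized boundary normals
        f = (logits - logits[orig]).detach()   # distances in logit space
        dist = torch.full((n_classes,), float("inf"))
        for k in range(n_classes):
            if k != orig:
                dist[k] = f[k].abs() / w[k].norm()
        k_star = dist.argmin()                 # nearest boundary
        r = dist[k_star] * w[k_star] / w[k_star].norm()
        x_adv = (x_adv + (1 + overshoot) * r).detach()
    return x_adv
```

    The linearization is only exact for affine classifiers; for deep networks it is an approximation, and the inaccuracies of that approximation are precisely the gaps the paper sets out to exploit.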

    Probabilistic Load Forecasting Based on Adaptive Online Learning

    Load forecasting is crucial for multiple energy management tasks such as scheduling generation capacity, planning supply and demand, and minimizing energy trade costs. Its relevance has increased even further in recent years due to the integration of renewable energies, electric cars, and microgrids. Conventional load forecasting techniques obtain single-value load forecasts by exploiting consumption patterns of past load demand. However, such techniques cannot assess intrinsic uncertainties in load demand, and cannot capture dynamic changes in consumption patterns. To address these problems, this paper presents a method for probabilistic load forecasting based on the adaptive online learning of hidden Markov models. We propose learning and forecasting techniques with theoretical guarantees, and experimentally assess their performance in multiple scenarios. In particular, we develop adaptive online learning techniques that update model parameters recursively, and sequential prediction techniques that obtain probabilistic forecasts using the most recent parameters. The performance of the method is evaluated using multiple datasets corresponding to regions of different sizes that display assorted time-varying consumption patterns. The results show that the proposed method can significantly improve the performance of existing techniques for a wide range of scenarios. Funding: Ramon y Cajal Grant RYC-2016-19383; Basque Government grant "Artificial Intelligence in BCAM, number EXP. 2019/00432"; Iberdrola Foundation 2019 Research Grant.
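
    To make "recursive parameter updates" concrete, the sketch below shows a generic exponential-forgetting estimator, a common pattern for adaptive online learning when patterns drift over time. It is an illustrative assumption, not the paper's HMM-specific updates or guarantees.

```python
import numpy as np

class OnlineGaussian:
    """Tracks a time-varying mean/variance with exponential forgetting,
    so recent observations dominate when consumption patterns drift."""

    def __init__(self, forgetting=0.95):
        self.lam = forgetting
        self.mean, self.var, self.weight = 0.0, 1.0, 0.0

    def update(self, y):
        """Recursive update: O(1) per observation, no data stored."""
        self.weight = self.lam * self.weight + 1.0
        lr = 1.0 / self.weight       # decays toward 1 - lam, never to zero,
        err = y - self.mean          # so old data keeps being forgotten
        self.mean += lr * err
        self.var = (1 - lr) * self.var + lr * err ** 2

    def predictive_interval(self, z=1.96):
        """Probabilistic forecast: a ~95% interval under a Gaussian model."""
        s = np.sqrt(self.var)
        return self.mean - z * s, self.mean + z * s
```

    Feeding hourly loads to update() and reading predictive_interval() after each step yields a sequential probabilistic forecast that adapts as the underlying consumption pattern changes.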