3,566 research outputs found

    Building Portfolios for the Protein Structure Prediction Problem

    This paper, concerned with the protein structure prediction problem, aims at automatically selecting the Constraint Satisfaction algorithm best suited to the problem instance at hand. The contribution is twofold. Firstly, the selection criterion is the expected quality (minimal cost) of the solution found after a fixed amount of time, as opposed to the expected runtime. Secondly, the presented approach, based on supervised Machine Learning algorithms, considers the original description of the protein structure problem, as opposed to features derived from the SAT or CSP encoding of the problem.
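The selection step this abstract describes can be sketched as an ordinary supervised classifier that maps instance-level features to the solver expected to return the cheapest solution within the time budget. The features, solver labels, and data below are synthetic placeholders, not the paper's actual protein descriptors:

```python
# Hypothetical sketch of per-instance algorithm selection via supervised
# learning. Feature columns and solver labels are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Each row: domain-level features of a problem instance (e.g. sequence
# length, contact density) rather than features of a SAT/CSP encoding.
X = rng.random((200, 4))
# Label: which of three hypothetical CSP solvers found the cheapest
# solution within the time budget on that instance.
y = rng.integers(0, 3, size=200)

selector = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# At solving time, predict the solver to run on a fresh instance.
new_instance = rng.random((1, 4))
chosen_solver = selector.predict(new_instance)[0]
print(chosen_solver)
```

The key design choice mirrored here is that the classifier sees the original problem description, so no encoding-specific feature extraction is required at prediction time.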

    A Comprehensive Study of k-Portfolios of Recent SAT Solvers

    Hard combinatorial problems such as propositional satisfiability are ubiquitous. The holy grail is a solution method that shows good performance on all problem instances. However, new approaches emerge regularly, some of which are complementary to existing solvers in that they run faster on some instances but not on many others. While portfolios, i.e., sets of solvers, have been touted as useful, putting together such portfolios also needs to be efficient. In particular, it remains an open question how well portfolios can exploit the complementarity of solvers. This paper features a comprehensive analysis of portfolios of recent SAT solvers, namely those from the SAT Competitions 2020 and 2021. We determine optimal portfolios with exact and approximate approaches and study the impact of the portfolio size k on performance. We also investigate how effective off-the-shelf prediction models are for instance-specific solver recommendations. One result is that the portfolios found with an approximate approach are as good as the optimal solution in practice. We also observe that marginal returns decrease very quickly with larger k, and our prediction models do not yield better performance beyond very small portfolio sizes.
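A minimal sketch of the kind of approximate (greedy) k-portfolio construction the abstract contrasts with exact optimization: repeatedly add the solver that most reduces the virtual-best-solver cost. The runtime matrix is synthetic, not SAT Competition data:

```python
# Greedy k-portfolio construction over a synthetic runtime matrix
# (rows = instances, columns = solvers). Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
runtimes = rng.exponential(100.0, size=(50, 8))  # 50 instances x 8 solvers

def portfolio_cost(cols):
    # Virtual-best-solver cost: per instance, take the fastest member.
    return runtimes[:, cols].min(axis=1).sum()

def greedy_portfolio(k):
    chosen = []
    for _ in range(k):
        remaining = [s for s in range(runtimes.shape[1]) if s not in chosen]
        # Add the solver whose inclusion yields the lowest portfolio cost.
        best = min(remaining, key=lambda s: portfolio_cost(chosen + [s]))
        chosen.append(best)
    return chosen

p3 = greedy_portfolio(3)
print(p3, portfolio_cost(p3))
```

Because `portfolio_cost` is monotone in the solver set, each added solver can only help, and the quickly shrinking marginal gain per step is exactly the diminishing-returns effect the paper reports for larger k.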

    Increasing student achievement with cooperative learning, curriculum changes, and portfolio assessment

    The purpose of this project was to determine if student achievement could be improved by curriculum changes within a cooperative learning framework and the use of portfolio assessment. Students in two classes were involved in the study. Both classes received the same instruction, worksheets, and quizzes. One class worked individually to complete the worksheets in preparation for weekly quizzes. The other class used the cooperative learning technique Student Teams-Achievement Divisions: students worked in groups of four to complete the worksheets in preparation for the weekly quizzes. The goal was for everyone in the group to learn the material, so that the team would earn the highest average quiz score and gain improvement points. Improvement points were awarded each week, and a record of the results was posted for each team. A statistical analysis of the classes' second marking period test scores showed a significant difference in their level of achievement: the noncooperative learning class demonstrated a higher level of academic achievement. Students were also given a pretest and a posttest before and after the unit to measure cognitive gains. The analysis of the pretest and posttest results showed a reduction in the gap between the two classes. Further time is needed to determine the impact of keeping a portfolio on the achievement of the less motivated chemistry student.

    Reinforcement learning for portfolio optimization

    In this study, the potential of using Reinforcement Learning for portfolio optimization is investigated, considering the constraints set by the stock market, such as liquidity, slippage, and transaction costs. Five Deep Reinforcement Learning (DRL) agents are trained in two different environments to test their ability to learn trading strategies for allocating assets, with the aim of generating higher cumulative returns. All agents are model-free and already tuned for financial problems, using the FinRL library; accordingly, the state space is high-dimensional, as is typical of financial market environments. The two proposed environments use market data from US stocks, and one of them also uses Finsent data, an alternative data source that contains news sentiment for all the stocks in the Dow Jones Industrial Average (DJIA). A series of backtesting experiments was performed from the beginning of 2019 to the beginning of 2020 to compare the two environments and to measure how the agents performed against the DJIA. All results were assessed with the pyfolio Python library, which provides the standard metrics for evaluating portfolio performance. Some algorithms achieved higher cumulative returns than on the first dataset, and the best result outperformed the DJIA by a significant margin while incurring a smaller drawdown.
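The comparison against the DJIA rests on standard backtest metrics of the kind pyfolio computes. A minimal sketch of two of them, cumulative return and maximum drawdown, on a synthetic daily-return series (not actual FinRL agent output):

```python
# Cumulative return and maximum drawdown from a daily-return series.
# The returns are synthetic, standing in for a backtested strategy.
import numpy as np

rng = np.random.default_rng(2)
daily_returns = rng.normal(0.0005, 0.01, size=252)  # one trading year

equity = np.cumprod(1.0 + daily_returns)            # growth of $1 invested
cumulative_return = equity[-1] - 1.0

# Drawdown: fractional drop of the equity curve from its running peak.
running_peak = np.maximum.accumulate(equity)
max_drawdown = ((equity - running_peak) / running_peak).min()

print(round(cumulative_return, 4), round(max_drawdown, 4))
```

Reporting both numbers together captures the trade-off the abstract highlights: a strategy can beat the benchmark on cumulative return while also exposing the investor to a smaller worst-case loss along the way.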

    Numerical and Evolutionary Optimization 2020

    This book was compiled after the 8th International Workshop on Numerical and Evolutionary Optimization (NEO) as a collection of papers on the intersection of the two research areas covered at the workshop: numerical optimization and evolutionary search techniques. While focusing on the design of fast and reliable methods lying across these two paradigms, the resulting techniques are applicable to a broad class of real-world problems, such as pattern recognition, routing, energy, production lines, prediction, and modeling, among others. This volume is intended to serve as a useful reference for mathematicians, engineers, and computer scientists exploring current issues and solutions emerging from these mathematical and computational methods and their applications.

    Can Collaborative Knowledge Building Promote Both Scientific Processes and Science Achievement?

    This study investigated the role of collective knowledge building in promoting scientific inquiry and achievement among Hong Kong high-school chemistry students. The participants included 34 Grade 10 (15-16 years old) students who engaged in collective inquiry and progressive discourse using Knowledge Forum®, a computer-supported learning environment. A comparison class of 35 students also participated in the study. The instructional design, premised on knowledge-building principles including epistemic agency, improvable ideas, and community knowledge, consisted of several components: developing a collaborative classroom culture, engaging in problem-centered inquiry, deepening the knowledge-building discourse, and aligning assessment with collective learning. Quantitative findings show that the students in the knowledge-building classroom outperformed the comparison students in scientific understanding, with sustained effects in public examinations. Analyses of knowledge-building dynamics indicate that the students showed deeper engagement and inquiry over time. Students' collaboration and inquiry on Knowledge Forum significantly predicted their scientific understanding, over and above the effects of their prior science achievement. Qualitative analyses suggest how students' knowledge-creation discourse, involving explanatory inquiry, constructive use of information, and theory revision, can scaffold scientific understanding.

    A New Approach for Analyzing Financial Markets Using Correlation Networks and Population Analysis

    With the current availability of massive data sets associated with stock markets, we now have opportunities to apply newly developed big data techniques and data-driven methodologies to analyze these complicated markets. As stock market data continue to grow, analyzing the behavior of companies listed on the market becomes a massive task, even for high-performance computing systems; hence, new big data techniques such as network models are much needed. We conducted this study on data collected from CRSP for the years 2000-2021 inclusive. We propose a novel population analysis that constructs a correlation network model from the monthly excess returns of different companies and applies the Louvain clustering algorithm to generate individual clusters/communities. After constructing the correlation networks from the input data, hidden knowledge was extracted by detecting communities and measuring network centralities. The Louvain algorithm served as a data analysis shortcut, grouping companies with highly correlated or similar financial behavior over the period of study. Within each community, we measured closeness, betweenness, and eigenvector centrality. The empirical results showed that centrality carries a different meaning in stock market networks than in social network analysis: in most networks, highly central entities are the most important ones, but here we found that high centrality is not what researchers should look for when building a low-risk portfolio. Instead, selecting nodes with lower centrality led to a more diverse portfolio with lower risk, consistent with Modern Portfolio Theory.
    Since this approach was applied to the years 2000-2021, the study also revealed behavioral patterns in stock movements around events such as the 9/11 attacks, the 2008 financial crisis, and the COVID-19 pandemic. Based on these results, we suggest a weighted-portfolio system for selecting portfolios that can outperform the benchmark both in normal circumstances and during crises.
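The correlation-network pipeline this abstract describes can be sketched end to end: build a graph whose edges link stocks with highly correlated returns, detect communities with Louvain, and rank nodes by centrality. The sketch below assumes networkx; the factor-driven returns are synthetic stand-ins for CRSP data, and the 0.5 correlation threshold is an arbitrary illustration:

```python
# Correlation network + Louvain communities + centrality ranking.
# Synthetic two-factor returns stand in for real excess-return data.
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
factors = rng.normal(size=(264, 2))          # 264 months, two market factors
loadings = np.zeros((20, 2))                 # 20 stocks, two sectors
loadings[:10, 0] = 1.0
loadings[10:, 1] = 1.0
returns = factors @ loadings.T + 0.5 * rng.normal(size=(264, 20))

corr = np.corrcoef(returns.T)                # 20 x 20 correlation matrix

# Keep only strongly correlated pairs as edges (threshold is illustrative).
G = nx.Graph()
G.add_nodes_from(range(corr.shape[0]))
for i in range(corr.shape[0]):
    for j in range(i + 1, corr.shape[0]):
        if corr[i, j] > 0.5:
            G.add_edge(i, j, weight=corr[i, j])

# Louvain groups stocks with similar financial behavior.
communities = nx.community.louvain_communities(G, seed=0)

# Centrality measures used in the study.
closeness = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)
eigen = nx.eigenvector_centrality(G, max_iter=1000)

# Low-centrality nodes are the diversification candidates the study favors.
low_central = sorted(closeness, key=closeness.get)[:5]
print(len(communities), low_central)
```

With the two-sector construction above, the threshold splits the graph into two dense groups, so Louvain recovers the sectors; picking low-centrality nodes then selects stocks that are weakly tied to the rest of the network, which is the diversification heuristic the abstract argues for.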