Implementing a Civic Skills-Focused Problem Posing Model to Enhance Problem-Solving Competencies in Elementary School Students: A Systematic Literature Review
This systematic literature review explores the application of the Problem Posing Model, a holistic educational approach, in civic education to enhance elementary school students' problem-solving abilities. By fostering a comprehensive understanding of conflict resolution and cooperative problem-solving, civic skills, derived from civic knowledge, become meaningful tools for addressing national and societal issues. The review methodology involves the identification, review, evaluation, and interpretation of empirical studies conducted over the past five years (2018-2022) that apply a civic skills-oriented problem-posing model in elementary education. An initial search using three primary keywords (problem-posing models, civic skills, and problem-solving abilities) yielded approximately 250 articles from databases such as ERIC, Google Scholar, and Sinta. After a meticulous screening process, 20 articles, primarily from high schools and universities, were selected for the review. The selected articles were published in international or national educational journals indexed in Scopus (Q1-Q4) or Sinta (S1-S4), and met specific criteria: a focus on Civics learning strategies, attention to misconceptions about applying problem-posing models and problem-solving, and publication within the last five years. The findings indicate that the problem-posing model enhances students' critical thinking and creativity and is more effective than conventional teaching methods. Integrating civic skills into Civics subjects boosts students' active participation and fosters attitudes of tolerance, cooperation, and responsibility. The application of a civic skills-oriented problem-posing model can therefore significantly improve students' ability to analyze problems critically while demonstrating openness, tolerance, and accountability.
Interval linear regression analysis based on Minkowski difference – a bridge between traditional and interval linear regression models
In this paper, we extend traditional linear regression methods to the (numerical input)-(interval output) data case, assuming both observation/measurement error and indeterminacy of the input-output relationship. We propose three different models based on three different assumptions about the interval output data. In each model, the errors are defined as intervals by solving the interval equation representing the relationship among the interval output, the interval function, and the interval error. We formalize the estimation of the interval function's parameters so as to minimize the sum of squared/absolute interval errors. By introducing a suitable interpretation of minimizing an interval function, each estimation problem is well-formulated as a quadratic or linear programming problem. The proposed methods are shown to be closely related to both traditional and interval linear regression methods, which are formulated in different manners.
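As a rough illustration of the general idea (not a reconstruction of the paper's three models), the simplest crisp-input/interval-output setting can be sketched by fitting interval midpoints and half-widths separately. The function name and the use of nonnegative least squares for the radii are assumptions made for this sketch:

```python
import numpy as np
from scipy.optimize import nnls

def fit_interval_regression(x, y_lo, y_hi):
    """Crisp input, interval output: fit the interval midpoints by ordinary
    least squares, and the interval half-widths (radii) by nonnegative least
    squares so the predicted radius can never be negative."""
    X = np.column_stack([np.ones_like(x), x])
    center, *_ = np.linalg.lstsq(X, (y_lo + y_hi) / 2, rcond=None)
    radius, _ = nnls(np.abs(X), (y_hi - y_lo) / 2)
    return center, radius
```

Splitting the fit this way is only one possible interpretation of "minimizing the sum of squared interval errors"; the paper's models couple the center and radius through an interval equation instead.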
Feature Selection Methods for Uplift Modeling
Uplift modeling is a predictive modeling technique that estimates the
user-level incremental effect of a treatment using machine learning models. It
is often used for targeting promotions and advertisements, as well as for the
personalization of product offerings. In these applications, there are often
hundreds of features available to build such models. Keeping all the features
in a model can be costly and inefficient. Feature selection is an essential
step in the modeling process for multiple reasons: improving the estimation
accuracy by eliminating irrelevant features, accelerating model training and
prediction speed, reducing the monitoring and maintenance workload for feature
data pipelines, and providing better model interpretation and diagnostics
capability. However, feature selection methods for uplift modeling have been
rarely discussed in the literature. Although there are various feature
selection methods for standard machine learning models, we will demonstrate
that those methods are sub-optimal for solving the feature selection problem
for uplift modeling. To address this problem, we introduce a set of feature
selection methods designed specifically for uplift modeling, including both
filter methods and embedded methods. To evaluate the effectiveness of the
proposed feature selection methods, we use different uplift models and measure
the accuracy of each model with a different number of selected features. We use
both synthetic and real data to conduct these experiments. We also implemented
the proposed filter methods in an open-source Python package (CausalML).
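As a hedged illustration of what a filter-style feature selection method for uplift modeling can look like (this is a generic sketch, not necessarily the scoring implemented in CausalML), one can rank features by how strongly the estimated treatment effect varies across quantile bins of each feature; all names below are illustrative:

```python
import numpy as np

def uplift_filter_scores(X, y, treatment, n_bins=5):
    """Score each feature by the weighted variance of the estimated
    treatment effect (mean treated outcome minus mean control outcome)
    across quantile bins of that feature."""
    scores = []
    for j in range(X.shape[1]):
        edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1))
        bins = np.clip(np.searchsorted(edges, X[:, j], side="right") - 1, 0, n_bins - 1)
        uplifts, weights = [], []
        for b in range(n_bins):
            m = bins == b
            t = m & (treatment == 1)
            c = m & (treatment == 0)
            if t.sum() > 0 and c.sum() > 0:
                uplifts.append(y[t].mean() - y[c].mean())
                weights.append(m.mean())
        u, w = np.asarray(uplifts), np.asarray(weights)
        overall = np.average(u, weights=w)
        scores.append(float(np.average((u - overall) ** 2, weights=w)))
    return np.asarray(scores)
```

The point of such a metric is exactly the sub-optimality the abstract describes: a feature that strongly predicts the outcome but not the *incremental* effect of treatment scores low here, while a standard importance measure would rank it high.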
Topic supervised non-negative matrix factorization
Topic models have been extensively used to organize and interpret the
contents of large, unstructured corpora of text documents. Although topic
models often perform well on traditional training vs. test set evaluations, it
is often the case that the results of a topic model do not align with human
interpretation. This interpretability fallacy is largely due to the
unsupervised nature of topic models, which prohibits any user guidance on the
results of a model. In this paper, we introduce a semi-supervised method called
topic supervised non-negative matrix factorization (TS-NMF) that enables the
user to provide labeled example documents to promote the discovery of more
meaningful semantic structure of a corpus. In this way, the results of TS-NMF
better match the intuition and desired labeling of the user. The core of TS-NMF
relies on solving a non-convex optimization problem for which we derive an
iterative algorithm that is shown to be monotonic and convergent to a local
optimum. We demonstrate the practical utility of TS-NMF on the Reuters and
PubMed corpora, and find that TS-NMF is especially useful for conceptual or
broad topics, where topic key terms are not well understood. Although
identifying an optimal latent structure for the data is not a primary objective
of the proposed approach, we find that TS-NMF achieves higher weighted Jaccard
similarity scores than the contemporary methods, (unsupervised) NMF and latent
Dirichlet allocation, at supervision rates as low as 10% to 20%.
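The core idea of constraining the document-topic factor with user-provided labels can be sketched with masked multiplicative updates; this is an illustrative simplification, not the paper's exact objective or algorithm:

```python
import numpy as np

def ts_nmf(X, mask, n_iter=200, eps=1e-9):
    """Masked multiplicative-update NMF: `mask` has one row per document;
    a zero entry forbids that document from loading on that topic, and an
    all-ones row leaves the document unsupervised."""
    rng = np.random.default_rng(0)
    W = rng.random((X.shape[0], mask.shape[1])) * mask  # document-topic weights
    H = rng.random((mask.shape[1], X.shape[1]))         # topic-term weights
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        W *= mask  # re-impose the label constraints after each update
    return W, H
```

Because the updates are multiplicative, entries of W zeroed by the mask stay zero, so labeled documents can only be explained by their permitted topics, which in turn pulls the topic-term matrix H toward the user's intended labeling.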
Fault surface tracing automation using computer vision algorithms
This article presents the results of adapting the U-net convolutional neural network to the problem of tracing fault surfaces in 3D seismic cubes. Fault mapping is one of the stages of interpreting the results of seismic methods in field geophysical work. The interpretation results are used to build structural frameworks of geological models, plan field development strategies, assess the hydrodynamic connectivity of reservoirs, and plan well locations and their number. The developed neural network algorithm, which uses computer vision techniques, can significantly speed up fault detection and reduce the risk of missing faults during interpretation. The article also considers the problems of applying a neural network trained on a synthetic data set to practical tasks. Methods for increasing the reliability of seismic interpretation are proposed, in particular, computing an additional volume of the coherence attribute and processing it with the neural network. The study concludes that convolutional neural networks are applicable to the problem of tracing fault surfaces.
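The coherence attribute mentioned in the abstract can be illustrated with a minimal semblance-style computation over a small lateral window; low coherence flags lateral discontinuities such as faults. This is a generic sketch, not the article's exact attribute or window design:

```python
import numpy as np

def semblance_coherence(cube, win=1):
    """Semblance-based coherence for a (inline, xline, time) cube: the ratio of
    the energy of the stacked trace to the total energy of the individual
    traces in a (2*win+1) x (2*win+1) lateral window. Identical neighboring
    traces give 1.0; discontinuities (e.g. across a fault) give lower values."""
    ni, nx, nt = cube.shape
    coh = np.ones((ni, nx))
    for i in range(win, ni - win):
        for j in range(win, nx - win):
            patch = cube[i - win:i + win + 1, j - win:j + win + 1].reshape(-1, nt)
            num = (patch.sum(axis=0) ** 2).sum()
            den = patch.shape[0] * (patch ** 2).sum()
            coh[i, j] = num / den if den else 1.0
    return coh
```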
A review of simulation and application of agent-based model approaches
Traditionally, researchers have relied on experiments and statistical data to examine and solve social and environmental problems. However, such methods are poorly suited to expressing or solving the complex dynamics of human-environment crises (such as the spread of disease, natural disaster management, and social problems). Computational modelling methods such as Agent-Based Models (ABM) have therefore become an effective technology for studying complex problems arising from human behaviour in societies, environments, and biological systems. This article outlines the properties of ABM and its applications in criminology, flood management, and the COVID-19 pandemic. It also reviews the limitations that must be overcome in the further development of ABM.
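As a minimal illustration of the agent-based approach for one of the listed applications (disease spread), the following sketch simulates susceptible-infectious-recovered agents with random contacts. All parameters and function names are illustrative assumptions, not taken from the review:

```python
import numpy as np

def run_sir_abm(n_agents=500, n_steps=50, p_infect=0.05, p_recover=0.1,
                contacts=10, seed=0):
    """Each step, every infectious agent meets `contacts` random agents and may
    infect susceptible ones; infectious agents recover with prob. p_recover."""
    rng = np.random.default_rng(seed)
    state = np.zeros(n_agents, dtype=int)  # 0 = susceptible, 1 = infectious, 2 = recovered
    state[:5] = 1                          # seed the outbreak with 5 agents
    history = []
    for _ in range(n_steps):
        for i in np.flatnonzero(state == 1):
            met = rng.integers(0, n_agents, contacts)
            hit = met[(state[met] == 0) & (rng.random(contacts) < p_infect)]
            state[hit] = 1
        recover = (state == 1) & (rng.random(n_agents) < p_recover)
        state[recover] = 2
        history.append(np.bincount(state, minlength=3).tolist())
    return history
```

The macro-level epidemic curve emerges purely from these micro-level agent rules, which is the defining property of ABM that the review contrasts with traditional statistical methods.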
Full Waveform Inversion and Lagrange Multipliers
Full-waveform inversion (FWI) is an effective method for imaging subsurface
properties using sparsely recorded data. It involves solving a wave propagation
problem to estimate model parameters that accurately reproduce the data. Recent
trends in FWI have led to the development of extended methodologies, among
which source extension methods leveraging reconstructed wavefields to solve
penalty or augmented Lagrangian (AL) formulations have emerged as robust
algorithms, even for inaccurate initial models. Despite their demonstrated
robustness, challenges remain, such as the lack of a clear physical
interpretation, difficulty in comparison, and reliance on difficult-to-compute
least squares (LS) wavefields. This paper is divided into two critical parts.
In the first, a novel formulation of these methods is explored within a unified
Lagrangian framework. This novel perspective permits the introduction of
alternative algorithms that employ LS multipliers instead of wavefields. These
multiplier-oriented variants appear as regularizations of the standard FWI, are
adaptable to the time domain, offer tangible physical interpretations, and
foster enhanced convergence efficiency. The second part of the paper delves
into understanding the underlying mechanisms of these techniques. This is
achieved by solving the FWI equations using iterative linearization and inverse
scattering methods. The paper provides insight into the role and significance
of Lagrange multipliers in enhancing the linearization of FWI equations. It
explains how different methods estimate multipliers or make approximations to
increase computing efficiency. Additionally, it presents a new physical
understanding of the Lagrange multiplier used in the AL method, highlighting
how important it is for improving algorithm performance when compared to
penalty methods.
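The formulations discussed above can be sketched schematically in standard FWI notation, where $\mathbf{A}(\mathbf{m})$ is the wave-equation operator for model $\mathbf{m}$, $\mathbf{u}$ the wavefield, $\mathbf{b}$ the source, $\mathbf{P}$ the sampling operator, and $\mathbf{d}$ the recorded data (the paper's exact definitions, scalings, and signs may differ):

```latex
% Standard (reduced) FWI: data misfit with the wave equation as a hard constraint
\min_{\mathbf{m}} \; \tfrac{1}{2}\bigl\|\mathbf{P}\,\mathbf{u}(\mathbf{m}) - \mathbf{d}\bigr\|_2^2
\quad \text{s.t.} \quad \mathbf{A}(\mathbf{m})\,\mathbf{u}(\mathbf{m}) = \mathbf{b}

% Penalty (extended-source) formulation: the constraint is relaxed with weight \mu
\min_{\mathbf{m},\,\mathbf{u}} \; \tfrac{1}{2}\bigl\|\mathbf{P}\mathbf{u} - \mathbf{d}\bigr\|_2^2
+ \tfrac{\mu}{2}\bigl\|\mathbf{A}(\mathbf{m})\,\mathbf{u} - \mathbf{b}\bigr\|_2^2

% Augmented Lagrangian: a multiplier term is added, with the classical update
% \boldsymbol{\lambda} \leftarrow \boldsymbol{\lambda} + \mu\,(\mathbf{A}(\mathbf{m})\mathbf{u} - \mathbf{b})
\mathcal{L}(\mathbf{m}, \mathbf{u}, \boldsymbol{\lambda})
= \tfrac{1}{2}\bigl\|\mathbf{P}\mathbf{u} - \mathbf{d}\bigr\|_2^2
+ \boldsymbol{\lambda}^{\top}\bigl(\mathbf{A}(\mathbf{m})\,\mathbf{u} - \mathbf{b}\bigr)
+ \tfrac{\mu}{2}\bigl\|\mathbf{A}(\mathbf{m})\,\mathbf{u} - \mathbf{b}\bigr\|_2^2
```

In this schematic view, the multiplier-oriented variants the abstract describes replace hard-to-compute least-squares wavefields with estimates of $\boldsymbol{\lambda}$, which act as a regularization of standard FWI.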