
    A Non-formal Education Program to Enhance Drug Abuse Resilience Quotient of Youth At-risk of Drug Relapse: The Approaching of the Transformative Learning Theory and the Cognitive Behavioral Modification Concept

    The purpose of this research was to develop a non-formal education program based on Transformative Learning Theory and the Cognitive Behavioral Modification Concept to enhance the drug abuse resilience quotient of youth at risk of drug relapse. The research procedure was divided into three phases: 1) studying learners' needs regarding a program to build the resilience quotient of youth at risk of drug relapse, 2) developing the non-formal education program, and 3) studying the effects of applying the program. The study used a quasi-experimental, two-group pretest-posttest design. The experimental group consisted of 30 at-risk youth who used the developed program; the control group consisted of 30 at-risk youth who received standard drug addiction treatment. Both groups lived in a congested community in the Klongtoey district of Bangkok, Thailand. The effects were compared between the two groups using a t-test. The results showed that the youth preferred interactive activities, such as games and role play, to lectures, and that the experimental group gained a significantly higher drug abuse resilience quotient than the control group. In conclusion, a non-formal education program based on transformative learning is suitable for enhancing the drug abuse resilience quotient of youth at risk of drug relapse.
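    As a purely illustrative aside, the two-group comparison described above can be sketched as an independent-samples t-test on gain scores. The numbers below are fabricated stand-ins (not the study's data), and SciPy is assumed:

```python
# Illustrative only: fabricated scores, not the study's data. Sketches the
# two-group pretest-posttest comparison via an independent-samples t-test
# on gain scores, assuming NumPy and SciPy are available.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre_exp = rng.normal(50, 8, 30)               # experimental group, pretest
post_exp = pre_exp + rng.normal(12, 4, 30)    # hypothetical program effect
pre_ctl = rng.normal(50, 8, 30)               # control group, pretest
post_ctl = pre_ctl + rng.normal(3, 4, 30)     # hypothetical treatment effect

# Compare gain scores between the two independent groups.
t, p = stats.ttest_ind(post_exp - pre_exp, post_ctl - pre_ctl)
print(f"t = {t:.2f}, p = {p:.4f}")
```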

    Predictive regression test selection technique by means of formal concept analysis

    Regression testing is an important software maintenance process that is applied to validate the correctness of software modifications. Since regression testing is usually costly, selective regression testing can be an alternative to traditional regression testing, allowing for a reduction of the overall cost associated with testing. Selective regression testing identifies only the test cases that execute parts of a program that can potentially be affected by the modification. In this thesis, we propose a novel technique to perform selective regression testing by means of a data analysis technique, Formal Concept Analysis. We use the capability of Formal Concept Analysis to structure the commonalities of execution traces derived from existing test cases. Formal Concept Analysis provides information about execution dependencies among different parts of a program, which can then be used to determine the relationships between a modified program portion and existing test cases. The approach analyzes these execution dependencies to select the test cases that should be rerun after a modification is complete. We have implemented a tool that automates regression test case selection and serves as a proof of concept.
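    A toy sketch of the underlying selection idea (not the thesis's tool): a formal context maps test cases to the program entities their execution traces cover, and tests whose traces touch a modified entity are selected for rerun. All names below are illustrative:

```python
# Toy sketch of trace-based selective regression testing in the spirit of
# Formal Concept Analysis: objects are test cases, attributes are the
# program entities covered by their execution traces.

traces = {
    "test_login":    {"auth.check", "db.read"},
    "test_checkout": {"cart.total", "db.read", "db.write"},
    "test_report":   {"report.render", "db.read"},
}

def extent(entities, ctx):
    """Tests covering every entity in `entities` (a formal extent)."""
    return {t for t, covered in ctx.items() if entities <= covered}

def select_tests(modified, ctx):
    """Tests that execute at least one modified entity."""
    return {t for t, covered in ctx.items() if covered & modified}

print(select_tests({"db.write"}, traces))  # {'test_checkout'}
print(extent({"db.read"}, traces))         # all three tests share db.read
```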

    Differentially Private Linear Optimization for Multi-Party Resource Sharing

    This study examines a resource-sharing problem involving multiple parties that agree to use a set of capacities together. We start by modeling the whole problem as a mathematical program, where all parties are required to exchange information to obtain the optimal objective function value. This information includes each party's private data in the form of the coefficients used in the mathematical program. Moreover, the parties also consider their individual optimal solutions private. In this setting, the concern for the parties is the privacy of their data and of their optimal allocations. We propose a two-step approach to meet the privacy requirements of the parties. In the first step, we obtain a reformulated model that is amenable to a decomposition scheme. Although this scheme eliminates almost all data exchange, it does not provide a formal privacy guarantee. In the second step, we provide this guarantee with a locally differentially private algorithm, which does not need a trusted aggregator, at the expense of deviating slightly from optimality. We provide bounds on this deviation and discuss the consequences of these theoretical results. We also propose a novel modification that increases the efficiency of the algorithm by reducing the theoretical optimality gap. The study ends with a numerical experiment on a planning problem that demonstrates an application of the proposed approach. Since we work with a general linear optimization model, our analysis and discussion can be applied in different areas, including production planning, logistics, and revenue management.
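    A minimal sketch of the locally differentially private primitive that the second step relies on, assuming the standard Laplace mechanism; `sensitivity` and `epsilon` below are illustrative parameters, not values from the paper:

```python
# Minimal sketch of a locally differentially private release: each party
# perturbs what it shares with Laplace noise calibrated to the value's
# sensitivity, so no trusted aggregator is required. `sensitivity` and
# `epsilon` are illustrative, not the paper's.
import numpy as np

def local_dp_release(value, sensitivity, epsilon, rng=None):
    """epsilon-differentially-private Laplace mechanism, applied locally."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return value + rng.laplace(0.0, scale, size=np.shape(value))

# Each party shares only a noisy version of its private coefficients.
true_coeffs = np.array([4.0, 2.5, 7.1])
shared = local_dp_release(true_coeffs, sensitivity=1.0, epsilon=0.5)
print(shared)
```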

    The influence of farmers' mental models on an agroforestry extension program in the Philippines

    The influence of farmers' mental models on the success of an agroforestry extension program on Leyte Island in the Philippines was investigated. Knowledge of farmers' mental models, and hence of the likely acceptance of technology, was used to inform the design of a hypothetically expanded program. To gain insight into the reasons behind the differing acceptance of extension assistance, data were collected and analysed from formal interviews, translated conversations and visual observations. The data provided a chain of evidence and triangulation between farmers' stated intentions and their actions. Farmers had little prior knowledge of nursery technology and were highly receptive to extension assistance, which enabled them to develop high self-efficacy in seedling production. However, farmers' rejection of silvicultural advice to thin and prune existing plantations was grounded in existing attitudes to forest resource management. Farmers also expressed a strong preference for a low-cost and low-input approach to establishing timber trees. Visual observations of farmers' tree establishment practices indicated gaps in their knowledge of tree growth processes. This investigation illustrates the need to elicit farmers' mental models as an enquiry parallel to extension activities. If agroforestry extension is to be constructivist and participatory, accommodation of farmers' mental models and modification of program goals may be necessary. Relatively little is known about the reasons for farmers' acceptance or rejection of silviculture in Leyte, and these results indicate that further research into the way farmers' mental models filter and guide acceptance of advice may be worthwhile.

    An analysis of the constructive content of Henkin's proof of Gödel's completeness theorem

    G{\"o}del's completeness theorem for classical first-order logic is one of the most basic theorems of logic. Central to any foundational course in logic, it connects the notion of valid formula to the notion of provable formula.We survey a few standard formulations and proofs of the completeness theorem before focusing on the formal description of a slight modification of Henkin's proof within intuitionistic second-order arithmetic.It is standard in the context of the completeness of intuitionistic logic with respect to various semantics such as Kripke or Beth semantics to follow the Curry-Howard correspondence and to interpret the proofs of completeness as programs which turn proofs of validity for these semantics into proofs of derivability.We apply this approach to Henkin's proof to phrase it as a program which transforms any proof of validity with respect to Tarski semantics into a proof of derivability.By doing so, we hope to shed an effective light on the relation between Tarski semantics and syntax: proofs of validity are syntactic objects with which we can compute.Comment: R{\'e}dig{\'e} en 4 {\'e}tapes: 2013, 2016, 2022, 202

    POWERPLAY: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem

    Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. The novel algorithmic framework POWERPLAY (2011) continually searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Wow-effects are achieved by continually making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. POWERPLAY's search orders candidate pairs of tasks and solver modifications by their conditional computational (time & space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are the first such pair found and validated. The computational cost of validating new tasks need not grow with task repertoire size. POWERPLAY's ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Gödel's sequence of increasingly powerful formal theories based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem-solving procedures can be exploited by a parallel search for solutions to additional externally posed tasks. POWERPLAY may be viewed as a greedy but practical implementation of basic principles of creativity. A first experimental analysis can be found in separate papers [53,54]. Comment: 21 pages, additional connections to previous work, references to first experiments with POWERPLAY
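    A schematic sketch of the search loop as the abstract describes it; `enumerate_pairs` and `solves` are hypothetical placeholders, and the real framework orders candidate pairs by conditional time and space complexity given the stored experience:

```python
# Schematic sketch of the POWERPLAY loop described above. `enumerate_pairs`
# and `solves` are hypothetical placeholders, not the paper's implementation.

def powerplay(solver, enumerate_pairs, solves, rounds):
    repertoire = []                              # previously learned tasks
    for _ in range(rounds):
        for task, candidate in enumerate_pairs(solver):   # cheapest first
            if (solves(candidate, task)
                    and not solves(solver, task)          # predecessor fails
                    and all(solves(candidate, t) for t in repertoire)):
                solver = candidate               # accept more powerful solver
                repertoire.append(task)          # skill validated and stored
                break
    return solver, repertoire
```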

    On the Use of Underspecified Data-Type Semantics for Type Safety in Low-Level Code

    In recent projects on operating-system verification, C and C++ data types are often formalized using a semantics that does not fully specify the precise byte encoding of objects. It is well-known that such an underspecified data-type semantics can be used to detect certain kinds of type errors. In general, however, underspecified data-type semantics are unsound: they assign well-defined meaning to programs that have undefined behavior according to the C and C++ language standards. A precise characterization of the type-correctness properties that can be enforced with underspecified data-type semantics is still missing. In this paper, we identify strengths and weaknesses of underspecified data-type semantics for ensuring type safety of low-level systems code. We prove sufficient conditions to detect certain classes of type errors and, finally, identify a trade-off between the complexity of underspecified data-type semantics and their type-checking capabilities. Comment: In Proceedings SSV 2012, arXiv:1211.587
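    A toy illustration of how an underspecified encoding can surface type errors, under our own simplifications rather than the paper's formal semantics:

```python
# Toy illustration (our own, not the paper's formalism): a value whose byte
# encoding is underspecified supports typed reads but has no defined
# byte-level view, so code that inspects its bytes is flagged -- the kind
# of type error such a semantics can detect.

class UnderspecifiedValue:
    def __init__(self, typ, payload):
        self.typ, self.payload = typ, payload

    def typed_read(self, expected):
        if expected != self.typ:
            raise TypeError(f"read as {expected}, stored as {self.typ}")
        return self.payload

    def byte_read(self):
        # The encoding is left unspecified, so this access has no defined
        # meaning; report it rather than invent an encoding.
        raise RuntimeError("byte-level access to underspecified encoding")

v = UnderspecifiedValue("int32", 42)
print(v.typed_read("int32"))        # well-defined typed access: 42
try:
    v.byte_read()                   # depends on the unspecified encoding
except RuntimeError as e:
    print("flagged:", e)
```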

    Learning a Static Analyzer from Data

    To be practically useful, modern static analyzers must precisely model the effects of both the statements of the programming language and the frameworks used by the program under analysis. While important, manually addressing these challenges is difficult for at least two reasons: (i) the effects on the overall analysis can be non-trivial, and (ii) as the size and complexity of modern libraries increase, so does the number of cases the analysis must handle. In this paper we present a new, automated approach for creating static analyzers: instead of manually providing the various inference rules of the analyzer, the key idea is to learn these rules from a dataset of programs. Our method consists of two ingredients: (i) a synthesis algorithm capable of learning a candidate analyzer from a given dataset, and (ii) a counter-example guided learning procedure which generates new programs beyond those in the initial dataset, critical for discovering corner cases and ensuring the learned analysis generalizes to unseen programs. We implemented and instantiated our approach for the task of learning JavaScript static analysis rules for a subset of points-to analysis and for allocation-site analysis. These are challenging yet important problems that have received significant research attention. We show that our approach is effective: our system automatically discovered practical and useful inference rules for many cases that are tricky to identify manually and are missed by state-of-the-art, manually tuned analyzers.
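    A sketch of the counter-example guided learning loop described above; `synthesize` and `find_counterexample` are hypothetical placeholders for the paper's synthesis algorithm and program generator:

```python
# Sketch of counter-example guided learning: alternate between fitting
# candidate rules to the dataset and searching for a program on which the
# rules are wrong; counterexamples are folded back into the dataset.

def learn_analyzer(dataset, synthesize, find_counterexample, max_rounds=100):
    analyzer = synthesize(dataset)           # candidate rules from dataset
    for _ in range(max_rounds):
        cex = find_counterexample(analyzer)  # program where the rules err
        if cex is None:
            break                            # no corner case found
        dataset.append(cex)                  # learn from the corner case
        analyzer = synthesize(dataset)
    return analyzer
```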