
    Is Financial Inclusion Effective in Eradicating Poverty?

    This paper examines the relationship between financial inclusion and poverty in Uganda, taking into account the effect of social intermediation and financial literacy. We employed a cross-sectional study design among 310 microfinance clients in rural western Uganda. Households of married people were more financially included and were hence able to reduce their poverty status. Social intermediation was positively linked to financial inclusion, and a positive link between financial literacy and financial inclusion was established. Education level and financial literacy do not have a significant positive effect on financial inclusion on their own, but education interacts with social intermediation to positively influence financial inclusion. Policies should emphasize the social aspects of society to drive financial inclusion and hence tackle poverty. The study contributes to the existing research in the area of financial inclusion and gives future researchers a wider research base. The theoretical contribution is that we integrate the theory of human capital in explaining financial inclusion from a developing-country perspective.

    Notions of Computation and Monads

    The λ-calculus is considered a useful mathematical tool in the study of programming languages, since programs can be identified with λ-terms. However, if one goes further and uses βη-conversion to prove equivalence of programs, then a gross simplification is introduced (programs are identified with total functions from values to values) that may jeopardise the applicability of theoretical results. In this paper we introduce calculi, based on a categorical semantics for computations, that provide a correct basis for proving equivalence of programs for a wide range of notions of computation.
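    The paper's central notion — a computation type constructor equipped with return and bind — can be sketched in OCaml. This is an illustrative rendering, not the paper's categorical presentation; the module and function names are our own.

```ocaml
(* A notion of computation as a type constructor 'a t with two
   operations, in the style Moggi made standard. *)
module type MONAD = sig
  type 'a t
  val return : 'a -> 'a t                  (* embed a value *)
  val bind : 'a t -> ('a -> 'b t) -> 'b t  (* sequence computations *)
end

(* Partiality as one such notion: a computation may fail to
   produce a value. *)
module Maybe : MONAD with type 'a t = 'a option = struct
  type 'a t = 'a option
  let return x = Some x
  let bind m f = match m with None -> None | Some x -> f x
end

let safe_div x y = if y = 0 then None else Some (x / y)

(* Sequencing two partial computations. *)
let result = Maybe.(bind (safe_div 10 2) (fun q -> return (q + 1)))
(* result = Some 6 *)
```

    Distinguishing values from computations in this way is exactly what lets one prove program equivalences without collapsing programs into total functions from values to values.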

    Decalf: A Directed, Effectful Cost-Aware Logical Framework

    We present decalf, a directed, effectful, cost-aware logical framework for studying quantitative aspects of functional programs with effects. Like calf, the language is based on a formal phase distinction between the extension and the intension of a program: its pure behavior as distinct from its cost, measured by an effectful step-counting primitive. The type theory ensures that the behavior is unaffected by the cost accounting. Unlike calf, the present language takes account of effects, such as probabilistic choice and mutable state; this extension requires a reformulation of calf's approach to cost accounting: rather than rely on a "separable" notion of cost, here a cost bound is simply another program. To make this formal, we equip every type with an intrinsic preorder, relaxing the precise cost accounting intrinsic to a program to a looser but nevertheless informative estimate. For example, the cost bound of a probabilistic program is itself a probabilistic program that specifies the distribution of costs. This approach serves as a streamlined alternative to the standard method of isolating a recurrence that bounds the cost, in a manner that readily extends to higher-order, effectful programs. The development proceeds by first introducing the decalf type system, which is based on an intrinsic ordering among terms that restricts in the extensional phase to extensional equality, but in the intensional phase reflects an approximation of the cost of a program of interest. This formulation is then applied to a number of illustrative examples, including pure and effectful sorting algorithms, simple probabilistic programs, and higher-order functions. Finally, we justify decalf via a model in the topos of augmented simplicial sets.
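    The slogan "a cost bound is simply another program" can be illustrated outside the type theory. The following OCaml sketch (our own analogy, not decalf's formal system) pairs each value with a step count via an effectful `step` primitive, and expresses the bound for a list function as another program of the same shape.

```ocaml
(* A value together with its cost, and step-counting as an effect.
   Illustrative names only; decalf works inside a type theory, not
   with explicit pairs like this. *)
type 'a cost = { value : 'a; steps : int }

let ret v = { value = v; steps = 0 }
let step c = { c with steps = c.steps + 1 }   (* the step-counting primitive *)
let bind c f =
  let c' = f c.value in
  { value = c'.value; steps = c.steps + c'.steps }

(* A length function instrumented with one step per cons cell. *)
let rec length = function
  | [] -> ret 0
  | _ :: xs -> step (bind (length xs) (fun n -> ret (n + 1)))

(* The cost bound is itself a program: steps equal to the list's size. *)
let bound xs = { value = List.length xs; steps = List.length xs }
```

    In decalf, the intrinsic preorder on each type is what relates a program such as `length` to a bound such as `bound`, rather than an external recurrence.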

    Retrofitting OCaml modules: Fixing signature avoidance in the generative case

    ML modules offer large-scale notions of composition and modularity. Provided as an additional layer on top of the core language, they have proven both vital to working OCaml and SML programmers and inspiring to other use-cases and languages. Unfortunately, their meta-theory remains difficult to comprehend, requiring heavy machinery to prove their soundness. Building on a previous translation from ML modules to Fω, we propose a new comprehensive description of a generative subset of OCaml modules, embarking on a journey right from the source OCaml module system, up to Fω, and back. We pause in the middle to uncover a system, called canonical, that combines the best of both worlds. On the way, we obtain type soundness, but also, and more importantly, a deeper insight into the signature avoidance problem, along with ways to improve both the OCaml language and its typechecking algorithm.
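    The signature avoidance problem the paper studies can be seen in a few lines of OCaml: a locally bound module's abstract type has no name in the enclosing scope, so values of that type cannot escape. A minimal sketch (the module names are ours):

```ocaml
(* An abstract type behind a signature. *)
module type S = sig type t val x : t end

(* Fine: M.x is consumed while M is still in scope. *)
let ok =
  let module M = (struct type t = int let x = 3 end : S) in
  ignore M.x

(* Rejected by the typechecker if uncommented:
   let bad =
     let module M = (struct type t = int let x = 3 end : S) in
     M.x
   The result would have type M.t, but M.t cannot be named outside
   the let-module binding — the type "would escape its scope". *)
```

    The typechecker must either find a signature that avoids mentioning `M.t` or fail; the paper's canonical system makes this choice principled.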

    OCaml modules: formalization, insights and improvements: Bringing the existential types into a generative subset

    Modularity is a key tool for building reusable and structured code. While the OCaml module system is known for being expressive and powerful, specifying its behavior and formalizing its properties has proven to be hard. We propose a comprehensive description of a generative subset of the OCaml module system (called the source system), with a focus on the signature avoidance problem. By extending the signature syntax with existential types (inspired by Fω), we describe an alternate system (called canonical) with a simpler set of judgments that both solves the issues of the source description and provides formal guarantees through elaboration into Fω. We show that the canonical system is more expressive than the source one and state under which conditions canonical typing derivations can be translated back into source typings. As a middle point between the path-based representation of OCaml and the formal description of Fω, the canonical presentation is a framework that we believe could support numerous features (applicative functors, transparent ascription, module aliases, etc.) and could serve as a basis for a redefinition of the module system of an industry-grade language such as OCaml.
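    OCaml's first-class modules already give a concrete taste of the existential types the paper adds to signatures: packing a module hides the witness type, and unpacking scopes it. A small sketch under our own names:

```ocaml
(* The signature plays the role of an existential package type:
   some type t exists, with these operations on it. *)
module type COUNTER = sig
  type t
  val init : t
  val incr : t -> t
  val get  : t -> int
end

(* Packing: the witness t = int is hidden from clients. *)
let counter : (module COUNTER) =
  (module struct
    type t = int
    let init = 0
    let incr n = n + 1
    let get n = n
  end)

(* Unpacking: inside run, C.t is abstract, as with an existential. *)
let run (module C : COUNTER) = C.get (C.incr (C.incr C.init))
(* run counter = 2 *)
```

    The canonical system, as we read the abstract, makes this existential structure explicit in signatures themselves rather than only at the level of packed values.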

    Focusing on Refinement Typing

    We present a logically principled foundation for systematizing, in a way that works with any computational effect and evaluation order, the SMT constraint generation seen in refinement type systems for functional programming languages. By carefully combining a focalized variant of call-by-push-value, bidirectional typing, and our novel technique of value-determined indexes, our system generates solvable SMT constraints without existential (unification) variables. We design a polarized subtyping relation allowing us to prove that our logically focused typing algorithm is sound, complete, and decidable. We prove type soundness of our declarative system with respect to an elementary domain-theoretic denotational semantics. Type soundness implies, relatively simply, the total correctness and logical consistency of our system. The relative ease with which we obtain both algorithmic and semantic results ultimately stems from the proof-theoretic technique of focalization. Comment: 61 pages plus appendix with proofs; Just Accepted version of the paper (with new title) at ACM Transactions on Programming Languages and Systems.
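    For readers new to refinement typing, a refinement such as {v:int | v >= 0} restricts a base type by a logical predicate; the systems the paper studies discharge such predicates statically via SMT. The OCaml sketch below (our own analogy, not the paper's system) approximates the same invariant dynamically with a private type and a smart constructor.

```ocaml
(* A dynamic stand-in for the refinement type {v:int | v >= 0}:
   the predicate is checked once, at construction, and the private
   type prevents forging unchecked values. *)
module Nat : sig
  type t = private int
  val of_int : int -> t option   (* the "constraint check" *)
  val to_int : t -> int
end = struct
  type t = int
  let of_int n = if n >= 0 then Some n else None
  let to_int n = n
end

let good = Nat.of_int 5      (* Some _ : the refinement holds *)
let bad  = Nat.of_int (-1)   (* None   : the refinement fails *)
```

    A refinement type system moves this check from run time to type checking, turning each `of_int`-style obligation into an SMT constraint; the paper's contribution is generating such constraints without existential unification variables, for any effect and evaluation order.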

    Array optimizations for high productivity programming languages

    While the HPCS languages (Chapel, Fortress and X10) have introduced improvements in programmer productivity, several challenges still remain in delivering high performance. In the absence of optimization, the high-level language constructs that improve productivity can result in order-of-magnitude runtime performance degradations. This dissertation addresses the problem of efficient code generation for high-level array accesses in the X10 language. The X10 language supports rank-independent specification of loop and array computations using regions and points. Three aspects of high-level array accesses in X10 are important for productivity but also pose significant performance challenges: high-level accesses are performed through Point objects rather than integer indices, variables containing references to arrays are rank-independent, and array subscripts are verified as legal array indices during runtime program execution. Our solution to the first challenge is to introduce new analyses and transformations that enable automatic inlining and scalar replacement of Point objects. Our solution to the second challenge is a hybrid approach. We use an interprocedural rank analysis algorithm to automatically infer ranks of arrays in X10. We use rank analysis information to enable storage transformations on arrays. If rank-independent array references still remain after compiler analysis, the programmer can use X10's dependent type system to safely annotate array variable declarations with additional information for the rank and region of the variable, and to enable the compiler to generate efficient code in cases where the dependent type information is available. Our solution to the third challenge is to use a new interprocedural array bounds analysis approach using regions to automatically determine when runtime bounds checks are not needed. 
Our performance results show that our optimizations deliver performance that rivals the performance of hand-tuned code with explicit rank-specific loops and lower-level array accesses, and is up to two orders of magnitude faster than unoptimized, high-level X10 programs. These optimizations also result in scalability improvements of X10 programs as we increase the number of CPUs. While we perform the optimizations primarily in X10, these techniques are applicable to other high-productivity languages such as Chapel and Fortress.
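    The first optimization — inlining and scalar replacement of Point objects — can be illustrated by analogy in OCaml (the dissertation targets X10; these names and this rendering are ours). The high-level form subscripts through a point-like record; the optimized form accesses the array with the record's components directly, so the record is never allocated.

```ocaml
(* Analogy for X10's Point-based array access. *)
type point = { i : int; j : int }

(* High-level: subscript through a point object. *)
let get (a : int array array) (p : point) = a.(p.i).(p.j)

(* After inlining + scalar replacement: the point's fields are
   passed as plain integers and no record exists at run time. *)
let get_scalar (a : int array array) i j = a.(i).(j)

let a  = [| [| 1; 2 |]; [| 3; 4 |] |]
let v  = get a { i = 1; j = 0 }   (* 3 *)
let v' = get_scalar a 1 0         (* 3 *)
```

    Both forms compute the same element; the point of the transformation is that the second avoids the object allocation and indirection that make the first an order of magnitude slower without optimization.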