
    Strong Types for Direct Logic

    Get PDF
    This article follows on the introductory article “Direct Logic for Intelligent Applications” [Hewitt 2017a]. Strong Types enable new mathematical theorems to be proved, including the Formal Consistency of Mathematics. Strong Types are also extremely important in Direct Logic because they block all known paradoxes [Cantini and Bruni 2017]. Blocking known paradoxes makes Direct Logic safer for use in Intelligent Applications by preventing security holes. Inconsistency Robustness is the performance of information systems with pervasively inconsistent information. The Inconsistency Robustness of the community of professional mathematicians is their performance in repeatedly repairing contradictions over the centuries. In the Inconsistency Robustness paradigm, deriving contradictions has been a progressive development and not a “game stopper.” Contradictions can be helpful instead of being something to be “swept under the rug” by denying their existence, which has been repeatedly attempted by authoritarian theoreticians (beginning with some Pythagoreans). Such denial has delayed mathematical development. This article reports how considerations of Inconsistency Robustness have recently influenced the foundations of mathematics for Computer Science, continuing a tradition of developing the sociological basis for foundations. Mathematics here means the common foundation of all classical mathematical theories, from Euclid to the mathematics used to prove Fermat's Last Theorem [McLarty 2010]. Direct Logic provides categorical axiomatizations of the Natural Numbers, Real Numbers, Ordinal Numbers, Set Theory, and the Lambda Calculus, meaning that up to a unique isomorphism there is only one model that satisfies the respective axioms. Good evidence for the consistency of Classical Direct Logic derives from how it blocks the known paradoxes of classical mathematics. Humans have spent millennia devising paradoxes for classical mathematics. Having a powerful system like Direct Logic is important in computer science because computers must be able to formalize all logical inferences (including inferences about their own inference processes) without requiring recourse to human intervention. Any inconsistency in Classical Direct Logic would be a potential security hole because it could be used to cause computer systems to adopt invalid conclusions. After [Church 1934], logicians faced the following dilemma: • 1st-order theories cannot be powerful lest they fall into inconsistency because of Church’s Paradox. • 2nd-order theories contravene the philosophical doctrine that theorems must be computationally enumerable. The above issues can be addressed by requiring Mathematics to be strongly typed so that: • Mathematics self-proves that it is “open” in the sense that theorems are not computationally enumerable. • Mathematics self-proves that it is formally consistent. • Strong mathematical theories for Natural Numbers, Ordinals, Set Theory, the Lambda Calculus, Actors, etc. are inferentially decidable, meaning that every true proposition is provable and every proposition is either provable or disprovable. Furthermore, theorems of these theories are not enumerable by a provably total procedure.
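
    As one illustration of how strong typing can block a self-referential paradox (a textbook-style stratification sketch, not Direct Logic's actual type system), suppose every sentence is assigned an order and a truth predicate of order n+1 applies only to sentences of order at most n:

    \[
      \mathsf{True}_{n+1} \colon \mathsf{Sentence}_{\le n} \to \mathsf{Boolean}
    \]

    A Liar sentence $L$ defined by $L = \neg\,\mathsf{True}_{n+1}(L)$ is then ill-typed: the definition forces $L$ to have order at least $n+1$, while the argument position of $\mathsf{True}_{n+1}$ requires $L$ to have order at most $n$, so the paradoxical sentence cannot even be formed.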

    CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning

    Full text link
    Program synthesis or code generation aims to generate a program that satisfies a problem specification. Recent approaches using large-scale pretrained language models (LMs) have shown promising results, yet they have some critical limitations. In particular, they often follow a standard supervised fine-tuning procedure to train a code generation model only from pairs of natural-language problem descriptions and ground-truth programs. Such a paradigm largely ignores some important but potentially useful signals in the problem specification, such as unit tests, and thus often results in poor performance when solving complex unseen coding tasks. To address these limitations, we propose "CodeRL", a new framework for program synthesis tasks through pretrained LMs and deep reinforcement learning (RL). Specifically, during training, we treat the code-generating LM as an actor network and introduce a critic network that is trained to predict the functional correctness of generated programs and provide dense feedback signals to the actor. During inference, we introduce a new generation procedure with a critical sampling strategy that allows a model to automatically regenerate programs based on feedback from example unit tests and critic scores. For the model backbones, we extend the encoder-decoder architecture of CodeT5 with enhanced learning objectives, larger model sizes, and better pretraining data. Our method not only achieves new SOTA results on the challenging APPS benchmark, but also shows strong zero-shot transfer capability with new SOTA results on the simpler MBPP benchmark.
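
    The actor-critic setup described above can be made concrete with a toy sketch. The following is a minimal, self-contained illustration (a stand-in, not the authors' CodeT5-based implementation): the "actor" is a tiny categorical policy over candidate programs, hidden unit tests supply the reward, and the "critic" learns a correctness estimate used as a baseline for the policy gradient. The toy vocabulary, tests, and hyperparameters are assumptions for illustration only.

    # Toy actor-critic sketch for program synthesis (illustrative, not CodeRL itself).
    import torch
    import torch.nn as nn

    VOCAB = ["return a+b", "return a-b", "return a*b", "return b-a"]  # toy "programs"

    def run_unit_tests(program: str) -> float:
        """Reward: fraction of hidden unit tests the generated program passes."""
        f = eval(f"lambda a, b: {program.split('return ')[1]}")
        tests = [((1, 2), 3), ((2, 5), 7), ((0, 0), 0)]
        return sum(f(*args) == out for args, out in tests) / len(tests)

    class Actor(nn.Module):                      # stand-in for the pretrained code LM
        def __init__(self):
            super().__init__()
            self.logits = nn.Parameter(torch.zeros(len(VOCAB)))
        def forward(self):
            return torch.distributions.Categorical(logits=self.logits)

    class Critic(nn.Module):                     # predicts functional correctness
        def __init__(self):
            super().__init__()
            self.value = nn.Embedding(len(VOCAB), 1)
        def forward(self, idx):
            return self.value(idx).squeeze(-1)

    actor, critic = Actor(), Critic()
    opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=0.1)

    for step in range(200):
        dist = actor()
        idx = dist.sample()                                  # sample a candidate program
        reward = torch.tensor(run_unit_tests(VOCAB[idx]))    # unit-test pass rate
        baseline = critic(idx)                               # critic's correctness estimate
        advantage = reward - baseline.detach()
        loss = -dist.log_prob(idx) * advantage + (baseline - reward) ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()

    print("most likely program:", VOCAB[actor().probs.argmax()])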

    Kentucky Vehicle License Plate Study

    Get PDF
    This study assesses Kentucky’s options for potentially re-plating all motor vehicles registered in the Commonwealth. The report begins with a background and discussion of Kentucky’s plate production processes, the difference between flat and embossed plates, and the structure of license plate labor at the Kentucky State Reformatory in La Grange. It details current plate production costs and processes, along with fees and production numbers. It evaluates three scenarios for future plate production: flat plate production, a hybrid system with embossed general issue plates and flat specialty plates, and an embossed plate system with in-house printed sheeting. Also included is an analysis of the effects of license plate characteristics on automated license plate reader accuracy, which has implications for automated screening and tolling. From there, the policies and approaches of other states are discussed. The report ends with a discussion of implementation costs, challenges, and strategies for state officials.

    Context-based multimedia semantics modelling and representation

    Get PDF
    The evolution of the World Wide Web, increases in processing power, and greater network bandwidth have contributed to the proliferation of digital multimedia data. Since multimedia data has become a critical resource in many organisations, there is an increasing need to gain efficient access to the data in order to share it, extract knowledge, and ultimately use that knowledge to inform business decisions. Existing methods for multimedia semantic understanding are limited to computable low-level features, which raises the question of how to identify and represent the high-level semantic knowledge in multimedia resources. In order to bridge the semantic gap between multimedia low-level features and high-level human perception, this thesis seeks to identify the possible contextual dimensions in multimedia resources to help in semantic understanding and organisation. This thesis investigates the use of contextual knowledge to organise and represent the semantics of multimedia data, aimed at efficient and effective content-based semantic retrieval of multimedia. A mixed-methods research approach incorporating both Design Science Research and Formal Methods for investigation and evaluation was adopted. A critical review of current approaches for multimedia semantic retrieval was undertaken and various shortcomings identified. The objectives for a solution were defined, which led to the design, development, and formalisation of a context-based model for multimedia semantic understanding and organisation. The model relies on the identification of different contextual dimensions in multimedia resources to aggregate meaning and facilitate semantic representation, knowledge sharing, and reuse. A prototype system for multimedia annotation, CONMAN, was built to demonstrate aspects of the model and validate the research hypothesis, H₁. Towards providing richer and clearer semantic representation of multimedia content, the original contributions of this thesis to Information Science include: (a) a novel framework and formalised model for organising and representing the semantics of heterogeneous visual data; and (b) a novel S-Space model that is aimed at visual information semantic organisation and discovery, and forms the foundations for automatic video semantic understanding.
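
    As a purely hypothetical illustration of the idea (the dimension names and fields below are assumptions, not taken from the thesis), contextual dimensions could be attached to a multimedia resource roughly as follows, so that retrieval can match queries against context rather than low-level features alone.

    # Hypothetical sketch of context-based annotation of a multimedia resource.
    from dataclasses import dataclass, field

    @dataclass
    class ContextualAnnotation:
        resource_uri: str
        spatial: str | None = None          # where the content was captured
        temporal: str | None = None         # when it was captured
        event: str | None = None            # what is happening
        agents: list[str] = field(default_factory=list)          # who/what appears
        low_level_tags: list[str] = field(default_factory=list)  # detector output

        def matches(self, query: str) -> bool:
            """Naive keyword match across the contextual dimensions."""
            haystack = " ".join(filter(None, [self.spatial, self.temporal, self.event,
                                              *self.agents, *self.low_level_tags]))
            return query.lower() in haystack.lower()

    clip = ContextualAnnotation(
        resource_uri="video/clip42.mp4",
        spatial="stadium", temporal="2019-07-14", event="football match",
        agents=["referee", "crowd"], low_level_tags=["green field", "motion"],
    )
    print(clip.matches("football"))  # True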

    An Action Research Case Study: A Sociocultural Perspective on Native American Students Learning Mathematics in a Public Elementary School Classroom

    Get PDF
    This dissertation is a qualitative study utilizing action research methods to develop a case study of the experience of urban public elementary school Native American students in collaborative mathematics activities. Data were collected through observations, mathematics assessments, and interviews to study how public school Native first graders experience collaborative mathematics activities when these are culturally modified with Indigenous ways of knowing and being. A major challenge in analyzing this work was finding a theoretical framework that could explain the experience of Native students in a multicultural public school. Sociocultural theory was selected because it operationalizes the key features of the study: Indigenous ways of knowing and being, and the joint activity of learning mathematics in a public school classroom. The study suggests that Native students were heterogeneous learners and responded to cognitive pluralism, a variety of instruction, student practice options, and assessments in differentiated ways. Furthermore, the collaborative activity of learning mathematics was influenced by the affective factors developed in classroom culture. The teacher designed the classroom as a caring community that acknowledged students' cultures with an appreciation and respect for the reality of student lives. The findings from this study suggest that collaborative mathematics activities can promote Native students' learning when teacher and student participation is varied in style and function, and when this joint activity is nested within the larger context of a supportive classroom community. Key to this premise is the concept of nested joint activities, where the synergy of learning operates within and between joint activities.

    Music Notation-to-Color Synesthesia and Early Stages of Music Education: A Grounded Theory Study

    Get PDF
    Problem: Synesthesia is a neurological condition characterized by over-abundant neural connectivity between commonly highly specialized areas of the brain. The developmental form of the condition often results in automatic and consistent cross-sensory associations between perceived stimuli and commonly unrelated brain regions. This research contemplates the specific form of music notation-to-color synesthesia and its impact on early stages of music education. Synesthetes with this mode of the condition tend to involuntarily yet consistently associate music-notational concepts with colors, thus rendering their assimilation of these concepts unique and individualized. The purpose of this study is to determine the extent of these individualized experiences from original narratives. Method: This study entails a grounded theory qualitative approach, through which 12 participants were interviewed cross-culturally (7 nationalities represented). All participants were adults with music notation-to-color synesthesia who experienced music instruction in a Western cultural context. Data collection involved a written survey, in-person (or live Zoom) interviews, and shared document analysis. Qualitative analysis used coding strategies to discover surfacing themes, emerging issues, and commonalities among the narratives. Results: Five overarching categories of commonalities were identified in this study. Firstly, participants shared generalities of synesthetic perceptions of music notation involving color, such as their awareness of their condition, the qualities of their experiences, and the conceptual basis of their associations, among other characteristics. Interviewees also alluded to the mechanisms involved in the perception of music notation, such as the positive impact of their music notation-to-color synesthesia on memory as well as the negative implications of synesthetic incongruence. The spatial location of synesthetic perceptions varied among participants. Interviewees reported projecting on the page of music and associating in their mind's eye, two common themes in the literature. Some participants, however, also mentioned a middle-ground location that does not fit only one of these categories. Finally, this study analyzed themes relating to the implications of this form of synesthesia for music education, with attention to awareness on the part of educators, instructional intentionality, validation, reinforcement of student individuality, and conscious use of the condition. Other themes and future research possibilities were also analyzed. Conclusion: This study arrived at two grounded theory models. The first comprises a grounded theory of the experiences shared by participants. This theoretical model articulates the salient themes, such as positive and negative traits of notation-to-color perceptions and the spatial location of perceptions. In particular, this theory argues for a tendency toward conceptually based notation-to-color synesthesia among participants. The second grounded theory model advanced in this research entails an educational approach that would promote awareness and intentionality in addressing students with music notation-to-color synesthesia. It discusses philosophical foundations, a theoretical framework, and methodological considerations that may transform how students with music notation-to-color synesthesia are accounted for in curricula. The study concludes by offering pedagogical suggestions derived from the methodological considerations. Firstly, it advances a linear process for identifying, verifying, and addressing synesthesia. Secondly, it proposes the elimination of excessive notational information and gradual learning as initial strategies that could benefit music notation-to-color synesthetes in learning new notational elements.

    Classification systems optimization with multi-objective evolutionary algorithms

    Get PDF
    The optimization of classification systems is a complex task that requires the intervention of a specialist (the experimenter). This task demands good knowledge of the application domain in order to extract the information relevant to building the classification or recognition system. Feature extraction is an iterative process based on experience. Normally, several evaluations of the recognition system's generalization performance, on a database representative of the real problem, are required to find an adequate representation space. The feature extraction process is normally followed by a feature subset selection (FSS) stage. The goal is to reduce the complexity of the recognition system while maintaining its generalization performance. Finally, if the feature extraction process can generate several representations of the problem, a performance gain can be obtained by combining several classifiers based on complementary representations. An ensemble of classifiers (EoC) can then yield better generalization performance for the recognition system. In this thesis, we propose a global approach for automating the feature extraction, feature selection, and classifier ensemble selection tasks based on multi-objective optimization. The proposed approach is modular and allows the experimenter's expertise to be integrated into the optimization process. Two genetic algorithms for multi-objective optimization were evaluated: the Fast Elitist Non-Dominated Sorting Algorithm (NSGA-II) and the Multi-Objective Memetic Algorithm (MOMA). The optimization algorithms were validated on a difficult problem, the recognition of isolated handwritten digits from the NIST SD19 database. Our method was then applied once to a handwritten letter recognition problem, a problem from the same domain for which we had not developed extensive expertise. The experimental results are conclusive and show that the resulting performance exceeds that of the human experimenter. Finally, a very important contribution of this thesis is the development of a method for visualizing and controlling the overfitting associated with the genetic algorithms used to optimize recognition systems. The experimental results reveal that all the optimization problems studied (feature extraction and selection, as well as classifier selection) can eventually suffer from overfitting. To date, this aspect has not been treated satisfactorily in the literature, and we propose an effective approach that contributes to the solution of this learning problem.
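
    The multi-objective feature-selection step can be illustrated with a small sketch (synthetic objectives and random search stand in for the thesis's NSGA-II/MOMA pipeline): each candidate is a binary feature mask evaluated on two objectives to be minimized, error rate and number of selected features, and only the non-dominated (Pareto-optimal) masks are retained.

    # Toy multi-objective feature subset selection: Pareto filtering of feature masks.
    import random

    random.seed(0)
    NUM_FEATURES = 10

    def evaluate(mask):
        """Return (error_rate, n_features) for a feature mask.
        Stand-in objective: only the first three features are assumed informative."""
        n_selected = sum(mask)
        informative = sum(mask[:3])
        error = 0.5 - 0.15 * informative + 0.01 * (n_selected - informative)
        return (round(error, 3), n_selected)

    def dominates(a, b):
        """True if objective vector a dominates b (both objectives minimized)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    # Random search over masks; a real system would evolve them with a MOEA instead.
    population = [[random.randint(0, 1) for _ in range(NUM_FEATURES)] for _ in range(200)]
    scored = [(mask, evaluate(mask)) for mask in population]

    pareto = [(m, s) for m, s in scored
              if not any(dominates(s2, s) for _, s2 in scored if s2 != s)]

    for mask, (err, n) in sorted(set((tuple(m), s) for m, s in pareto), key=lambda x: x[1][1]):
        print(f"features={n:2d}  error={err:.3f}  mask={mask}")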

    Measuring software security from the design of software

    Get PDF
    The vast majority of our contemporary society owns a mobile phone, which has resulted in a dramatic rise in the number of networked computers in recent years. Security issues in these computers have followed the same trend, and nearly everyone is now affected by them. How could the situation be improved? For software engineers, an obvious answer is to build computer software with security in mind. A problem with doing so is how to define secure software, or how to measure security. This thesis divides the problem into three research questions. First, how can we measure the security of software? Second, what types of tools are available for measuring security? And finally, what do these tools reveal about the security of software? Measuring tools of this kind are commonly called metrics. This thesis takes the perspective of software engineers in the software design phase. The focus on the design phase means that code-level semantics and programming-language specifics are not discussed in this work. Organizational policy, management issues, and the software development process are also out of scope. The first two research problems were studied using a literature review, while the third was studied using case study research. The target of the case study was a Java-based email server called Apache James, whose changelog, security issues, and source code were available. The research revealed that there is consensus in the terminology of software security. Security verification activities are commonly divided into evaluation and assurance. The focus of this work was on assurance, which means verifying one's own work. There are 34 metrics available for security measurement, of which five are evaluation metrics and 29 are assurance metrics. We found, however, that the general quality of these metrics was not good. Only three metrics in the design category passed the inspection criteria and could be used in the case study. The metrics claim to give quantitative information on the security of the software, but in practice they were limited to evaluating different versions of the same software. Apart from being relative, the metrics were unable to detect security issues or point out problems in the design. Furthermore, interpreting the metrics' results was difficult. In conclusion, the general state of software security metrics leaves a lot to be desired. The metrics studied had both theoretical and practical issues, and are not suitable for daily engineering workflows. They nevertheless provide a basis for further research, since they point out areas that must be improved if security is to be verified from the design.
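
    As a toy example of why design-phase metrics tend to be useful mainly as relative comparisons between versions (this is a hypothetical metric, not one of the 34 surveyed, and the component names are illustrative), consider the share of design components that accept untrusted input, computed for two versions of the same system.

    # Hypothetical design-phase "exposure" metric compared across two versions.
    def exposure_ratio(design: dict[str, bool]) -> float:
        """design maps component name -> 'handles untrusted input?'."""
        return sum(design.values()) / len(design)

    v1 = {"SMTPHandler": True, "POP3Handler": True, "Mailet": False, "Store": False}
    v2 = {"SMTPHandler": True, "POP3Handler": True, "IMAPHandler": True,
          "Mailet": False, "Store": False}

    print(f"v1 exposure: {exposure_ratio(v1):.2f}")   # 0.50
    print(f"v2 exposure: {exposure_ratio(v2):.2f}")   # 0.60 -> larger attack surface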
    • 

    corecore