
    Towards a Higher-Dimensional String Theory for the Modeling of Computerized Systems

    Recent modeling experiments conducted in computational music give evidence that a number of concepts, methods, and tools from inverse semigroup theory can be attuned to the concrete modeling of time-sensitive interactive systems. Further theoretical developments show that related notions of higher-dimensional strings can serve as a unifying theme across word and tree automata theory. In this invited paper, we provide a guided tour of this emerging theory, both as an abstract theory and with a view to concrete applications.

    Possibilities and Challenges of Using Educational Cheminformatics for STEM Education: A SWOT Analysis of a Molecular Visualization Engineering Project

    This perspective paper analyses the possibilities and challenges of using cheminformatics as a context for STEM education. The objective is to produce theoretical insights through a SWOT analysis of an authentic educational cheminformatics project in which future chemistry teachers engineered a physical 3D model using cheminformatics software and a 3D printer. In this article, engineering is considered the connective STEM component binding technology (cheminformatics software and databases), science (molecular visualizations), and mathematics (graph theory) together in a pedagogically meaningful whole. The main conclusion of the analysis is that cheminformatics offers great possibilities for STEM education. It is a solution-centered research field that produces concrete artifacts such as visualizations, software, and databases. This is well suited to STEM education, enabling an engineering-based approach that ensures students’ active and creative roles. The main challenge is the high content knowledge demand arising from the multidisciplinary nature of cheminformatics. This challenge can be addressed via training and collaborative learning environment design. Although work with educational cheminformatics is still in its infancy, it seems a highly promising context for supporting chemistry learning via STEM education.
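The abstract above ties mathematics to cheminformatics via graph theory. As a minimal, hedged sketch of that connection (the molecule, node labels, and helper functions below are illustrative choices, not taken from the project described), a molecule can be modeled as a graph whose nodes are atoms and whose edges are bonds:

```python
# Minimal illustration (not from the project above): a molecule as a graph,
# the basic cheminformatics abstraction linked to graph theory.

# Ethanol (CH3-CH2-OH) as an adjacency list; node names are hypothetical
# identifiers chosen for this sketch.
ETHANOL = {
    "C1": ["C2", "H1", "H2", "H3"],
    "C2": ["C1", "O1", "H4", "H5"],
    "O1": ["C2", "H6"],
    "H1": ["C1"], "H2": ["C1"], "H3": ["C1"],
    "H4": ["C2"], "H5": ["C2"], "H6": ["O1"],
}

def degree(graph, node):
    """Number of bonds incident to an atom (graph-theoretic degree)."""
    return len(graph[node])

def is_consistent(graph):
    """Every bond must be listed from both of its endpoints."""
    return all(a in graph[b] for a in graph for b in graph[a])
```

Checking `degree(ETHANOL, "C2")` against carbon's expected valence of 4 is the kind of simple structural validation a 3D-printable model would also have to respect.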

    Experimental Mathematics


    Towards Unifying Structures in Higher Spin Gauge Symmetry

    This article is expository in nature, outlining some of the many still incompletely understood features of higher spin field theory. We mainly consider higher spin gauge fields in their own right as free-standing theoretical constructs, not circumstances where they occur as part of another system. For the problem of introducing interactions among higher spin gauge fields, there have historically been two broad avenues of approach. One entails gauging a non-Abelian global symmetry algebra, in the process making it local. The other entails deforming an already local but Abelian gauge algebra, in the process making it non-Abelian. In cases where both avenues have been explored, such as for spin 1 and spin 2 gauge fields, the results agree (barring conceptual and technical issues) with Yang-Mills theory and Einstein gravity. In the case of an infinite tower of higher spin gauge fields, the first approach has been thoroughly developed and explored by M. Vasiliev, whereas the second approach, after having lain dormant for a long time, has lately received new attention from several authors. In the present paper we briefly review some aspects of the history of higher spin gauge fields as a backdrop to an attempt at comparing the gauging vs. deforming approaches. A common unifying structure of strongly homotopy Lie algebras underlying both approaches will be discussed. The modern deformation approach, using BRST-BV methods, will be described as far as it is developed at the present time. The first steps of a formulation in the categorical language of operads will be outlined. A few aspects of the subject that seem not to have been thoroughly investigated are pointed out.
    Comment: This is a contribution to the Proc. of the Seventh International Conference "Symmetry in Nonlinear Mathematical Physics" (June 24-30, 2007, Kyiv, Ukraine), published in SIGMA (Symmetry, Integrability and Geometry: Methods and Applications) at http://www.emis.de/journals/SIGMA

    Evolutionary Computation and QSAR Research

    The successful high-throughput screening of molecule libraries for a specific biological property is one of the main improvements in drug discovery. Virtual molecular filtering and screening rely greatly on quantitative structure-activity relationship (QSAR) analysis, a mathematical model that correlates the activity of a molecule with molecular descriptors. QSAR models have the potential to reduce the costly failure of drug candidates in advanced (clinical) stages by filtering combinatorial libraries, eliminating candidates with predicted toxic effects and poor pharmacokinetic profiles, and reducing the number of experiments. To obtain a predictive and reliable QSAR model, scientists use methods from various fields such as molecular modeling, pattern recognition, machine learning, and artificial intelligence. QSAR modeling relies on three main steps: codification of molecular structure into molecular descriptors, selection of the variables relevant to the analyzed activity, and search for the optimal mathematical model that correlates the molecular descriptors with a specific activity. Since a variety of techniques from statistics and artificial intelligence can aid the variable selection and model building steps, this review focuses on the evolutionary computation methods supporting these tasks. Thus, this review explains the basics of genetic algorithms and genetic programming as evolutionary computation approaches, the selection methods for high-dimensional data in QSAR, the methods to build QSAR models, current evolutionary feature selection methods and applications in QSAR, and future trends in joint or multi-task feature selection methods.
    Funding: Instituto de Salud Carlos III (PIO52048; RD07/0067/0005); Ministerio de Industria, Comercio y Turismo (TSI-020110-2009-53); Galicia, Consellería de Economía e Industria (10SIN105004P).
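The evolutionary variable-selection step surveyed above can be sketched as a toy genetic algorithm that evolves a bitmask over descriptors. Everything below (the fitness function, the set of "relevant" descriptors, and the GA parameters) is an invented stand-in to show the mechanism, not a real QSAR model from the review:

```python
import random

# Toy genetic algorithm for descriptor (feature) selection. A candidate is
# a bitmask over descriptors; fitness rewards picking the (pretend) relevant
# ones while penalizing subset size, mimicking parsimony in QSAR models.

random.seed(0)

N_DESCRIPTORS = 8
RELEVANT = {0, 3, 5}  # hypothetical: only these descriptors carry signal

def fitness(mask):
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & RELEVANT) - 0.1 * len(chosen)

def mutate(mask, rate=0.1):
    return [1 - b if random.random() < rate else b for b in mask]

def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

def select_descriptors(pop_size=20, generations=40):
    pop = [[random.randint(0, 1) for _ in range(N_DESCRIPTORS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist survival
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = select_descriptors()
```

In a real QSAR workflow the fitness function would instead be the cross-validated accuracy of a model fitted on the selected descriptors, which is where the expensive model-building step of the review enters.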

    Development and Validation of a Counterproductive Work Behavior Situational Judgment Test With an Open-ended Response Format: A Computerized Scoring Approach

    Due to the many detrimental effects of counterproductive work behavior (CWB), it is important to measure the construct accurately. Despite this, current CWB measures have some problematic limitations, including items that do not apply to all jobs while missing items that are important for other jobs (Bowling & Gruys, 2010). The current study tackles these issues by drawing on the benefits associated with open-ended response situational judgment tests (SJTs), such as their potential to elicit more insight from respondents (Finch et al., 2018), to develop an open-ended response CWB SJT. To minimize the drawbacks currently associated with the manual analysis of open-ended response SJTs, such as being time-consuming and costly (which is also a reason they are rarely used), the study leverages natural language processing and machine learning to measure CWB. Using a two-dimensional conceptualization of CWB, comprising CWB against the organization (CWB-O) and against individuals (CWB-I), the CWB SJT dimensions had moderate to strong correlations with the popular Workplace Deviance scale (Bennett & Robinson, 2000). Findings further indicate that the CWB SJT is related to variables typically associated with CWB tendencies, such as neuroticism and trait self-control. Using topic modeling, it was also found that topic prevalence was largely consistent through time, both for the full CWB SJT and for individual items, implying test-retest reliability. The CWB SJT, along with R code for analyzing the open-ended responses, is provided. Implications of the CWB SJT for research and practice are discussed.
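The study's actual scoring pipeline uses NLP, machine learning, and R code not reproduced here. As a minimal, hedged illustration of the general idea of computerized scoring of open-ended responses, the sketch below compares a response's bag of words against invented keyword lists for the two CWB dimensions; the keywords, function names, and scoring rule are all assumptions for this example:

```python
import math
from collections import Counter

# Bag-of-words cosine similarity of a free-text response against keyword
# lists for the two CWB dimensions. A stand-in for the study's NLP/ML
# pipeline; the keyword lists are invented for illustration.

DIMENSION_KEYWORDS = {
    "CWB-O": "steal supplies falsify records waste time ignore rules",
    "CWB-I": "insult coworker gossip blame yell mock colleague",
}

def bag(text):
    return Counter(text.lower().split())

def cosine(a, b):
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def score_response(text):
    """Similarity of one open-ended response to each CWB dimension."""
    response = bag(text)
    return {dim: cosine(response, bag(words))
            for dim, words in DIMENSION_KEYWORDS.items()}

scores = score_response("I would gossip about the coworker and blame them")
```

A response describing gossip and blame scores higher on CWB-I than CWB-O, which is the kind of dimension-level signal a trained model would then refine.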

    Validating argument-based opinion dynamics with survey experiments

    The empirical validation of models remains one of the most important challenges in opinion dynamics. In this contribution, we report on recent developments in combining data from survey experiments with computational models of opinion formation. We extend previous work on the empirical assessment of an argument-based model for opinion dynamics in which biased processing is the principal mechanism. While previous work (Banisch & Shamon, in press) focused on calibrating the micro mechanism with experimental data on argument-induced opinion change, this paper concentrates on the macro level using the empirical data gathered in the survey experiment. For this purpose, the argument model is extended by an external source of balanced information, which makes it possible to control for the impact of peer influence processes relative to other noisy processes. We show that surveyed opinion distributions are matched with a high level of accuracy in a specific region of the parameter space, indicating an equal impact of social influence and external noise. More importantly, the estimated strength of biased processing given the macro data is compatible with the values that achieve high likelihood at the micro level. The main contribution of the paper is hence to show that the extended argument-based model provides a solid bridge from the micro processes of argument-induced attitude change to macro-level opinion distributions. Beyond that, we review the development of argument-based models and present a new method for the automated classification of model outcomes.
    Comment: Keywords: opinion dynamics, validation, empirical confirmation, survey experiments, parameter estimation, argument communication theory, computational social science
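The mechanism family described above, biased argument processing from peers plus a balanced external source, can be sketched as a toy simulation. The update rule, parameter names, and values below are illustrative assumptions, not the authors' calibrated model:

```python
import math
import random

# Toy opinion-dynamics sketch: agents hold opinions in [-1, 1]; arguments
# arrive either from a random peer or, with probability `noise`, from a
# balanced external source. Acceptance is biased toward the receiver's
# current opinion via a logistic function of strength `beta` (beta = 0
# would mean unbiased processing). All parameters are illustrative.

random.seed(1)

def accept_prob(opinion, argument, beta):
    # Congruent arguments (same sign as the opinion) are accepted more
    # often as beta grows.
    return 1.0 / (1.0 + math.exp(-beta * opinion * argument))

def simulate(n_agents=50, steps=2000, beta=2.0, noise=0.2, step_size=0.1):
    opinions = [random.uniform(-1, 1) for _ in range(n_agents)]
    for _ in range(steps):
        i = random.randrange(n_agents)
        if random.random() < noise:
            argument = random.choice([-1, 1])   # balanced external source
        else:
            j = random.randrange(n_agents)      # peer-supplied argument
            argument = 1 if opinions[j] > 0 else -1
        if random.random() < accept_prob(opinions[i], argument, beta):
            opinions[i] = max(-1.0,
                              min(1.0, opinions[i] + step_size * argument))
    return opinions

final = simulate()
```

Sweeping `beta` and `noise` and comparing the resulting opinion distributions to survey data is, in spirit, the macro-level calibration exercise the paper reports.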

    Complex adaptive systems based data integration : theory and applications

    Data Definition Languages (DDLs) have been created and used to represent data in programming languages and in database dictionaries. This representation includes descriptions in the form of data fields and relations in the form of a hierarchy, with the common exception of relational databases, where relations are flat. Network computing created an environment that enables relatively easy and inexpensive exchange of data. What followed was the creation of new DDLs claiming better support for automatic data integration. It is uncertain from the literature whether any real progress has been made toward achieving an ideal state or limit condition of automatic data integration. This research asserts that difficulties in accomplishing integration are indicative of socio-cultural systems in general and are caused by some measurable attributes common in DDLs. This research’s main contributions are: (1) a theory of data integration requirements to fully support automatic data integration from autonomous heterogeneous data sources; (2) the identification of measurable related abstract attributes (Variety, Tension, and Entropy); and (3) the development of tools to measure them. The research uses a multi-theoretic lens to define and articulate these attributes and their measurements. The proposed theory is founded on the Law of Requisite Variety, Information Theory, Complex Adaptive Systems (CAS) theory, Sowa’s Meaning Preservation framework, and Zipf distributions of words and meanings. Using the theory, the attributes, and their measures, this research proposes a framework for objectively evaluating the suitability of any data definition language with respect to degrees of automatic data integration. This research uses thirteen data structures constructed with various DDLs from the 1960s to date. No DDL examined (and therefore no DDL similar to those examined) is designed to satisfy the Law of Requisite Variety. No DDL examined is designed to support CAS evolutionary processes that could result in fully automated integration of heterogeneous data sources. There is no significant difference in measures of Variety, Tension, and Entropy among the DDLs investigated in this research. A direction to overcome the common limitations discovered in this research is suggested and tested by proposing GlossoMote, a theoretical, mathematically sound description language that satisfies the data integration theory requirements. GlossoMote is not merely a new syntax; it is a drastic departure from existing DDL constructs. The feasibility of the approach is demonstrated with a small-scale experiment and evaluated using the proposed assessment framework and other means. The promising results call for additional research to evaluate the commercial potential of GlossoMote’s approach.
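Of the three attributes named above, Entropy is the most directly computable. As a hedged sketch of the general idea (the tiny schemas and the choice of token distribution are invented for illustration; the dissertation defines its own measures over thirteen real DDLs), Shannon entropy can be taken over the frequency distribution of names appearing in a data definition:

```python
import math
from collections import Counter

# Shannon entropy of a name/token distribution: H = -sum(p * log2(p)).
# An illustrative proxy for the "Entropy" attribute of a data definition;
# the schemas below are hypothetical.

def shannon_entropy(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

# Two hypothetical field-name inventories from different data definitions.
uniform_schema = ["id", "name", "price", "stock"]      # all names distinct
repetitive_schema = ["value", "value", "value", "id"]  # heavy name reuse
```

Four distinct names give the maximal entropy of 2 bits, while heavy reuse of a generic name like `value` lowers it; comparing such scores across DDL definitions is the flavor of objective measurement the framework proposes.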

    Two Essays on Analytical Capabilities: Antecedents and Consequences

    Although organizations are rapidly embracing business analytics (BA) to enhance organizational performance, only a small proportion have managed to build analytical capabilities. While BA continues to draw attention from academics and practitioners, theoretical understanding of the antecedents and consequences of analytical capabilities remains limited and lacks a systematic view. To address this research gap, the two essays investigate: (a) an organization’s core information-processing mechanisms and their impact on analytical capabilities; (b) the sequential approach to integration of IT-enabled business processes and its impact on analytical capabilities; and (c) network position and its impact on analytical capabilities. Drawing upon Information Processing Theory (IPT), the first essay investigates the relationship between an organization’s core information-processing mechanisms, i.e., electronic health records (EHRs), clinical information standards (CIS), and collaborative information exchange (CIE), and analytical capabilities. We use data from two sources (HIMSS Analytics 2013 and AHA IT Survey 2013) to empirically test the theorized relationships in the healthcare context. Using competitive progression theory, the second essay investigates whether an organization’s sequential approach to the integration of IT-enabled business processes is associated with increased analytical capabilities. We use data from three sources (HIMSS Analytics 2013, AHA IT Survey 2013, and CMS 2014) to test whether sequential integration of EHRs, reflecting the unique organizational path of integration, has a significant impact on hospitals’ analytical capabilities. Together, the two essays advance our understanding of the factors that enable firms’ analytical capabilities. We discuss in detail the theoretical and practical implications of the findings and the opportunities for future research.