16 research outputs found

    A Constructive Mathematics approach for Natural Language formal grammars

    A mathematical description of natural language grammars was first proposed by Leibniz. After Frege's definition of unsaturated expressions and Husserl's foundation of a logical grammar, the application of logic to the computational treatment of natural language grammars raised the interest of linguists, for example through Lambek's categorial calculus. In recent years, the most consolidated formal grammars (e.g., Minimalism, HPSG, TAG, CCG, Dependency Grammars) began to show an interest in giving a strong psychological interpretation to the formalism, and hence to the natural language data to which they are applied. Nevertheless, no one seems to have paid much attention to cognitive linguistics, a branch of linguistics that actively uses concepts and results from the cognitive sciences. In a seemingly unrelated development, the study of computational concepts and formalisms has evolved in tandem with constructive formal systems, especially in the branch of logic called proof theory; see, e.g., the Curry-Howard isomorphism and typed functional languages. In this paper, we bridge these worlds and present our natural language formalism, called Adpositional Grammars (AdGrams), which is founded on both cognitive linguistics and constructive mathematics.

    The Minimal Levels of Abstraction in the History of Modern Computing

    From the advent of general-purpose, Turing-complete machines, the relation of operators, programmers, and users with computers can be seen in terms of interconnected informational organisms (inforgs), analysed here with the method of levels of abstraction (LoAs) that arose within the Philosophy of Information (PI). In this paper, the epistemological levellism proposed by L. Floridi in the PI to deal with LoAs is formalised in constructive terms using category theory, so that information itself is treated as structure-preserving functions instead of Cartesian products. The milestones in the history of modern computing are then analysed via constructive levellism to show how the growth of system complexity led to more and more information hiding.

    Constructive Conversation Analysis in psychotherapy: cognitive relevance of actants in terms of linguistic constructions

    Psychotherapists produce pseudo-structured discourse with their clients that can be analysed with linguistics and pragmatics. Conversation Analysis is often qualitative and non-systematic. The Therapeutic Cycles Model (TCM) uses ad-hoc software to perform textual analysis of psychotherapeutic transcripts in order to elicit significant elements in the therapeutic interaction, but it does not consider linguistic constructions as units of analysis. Constructive Adpositional Grammars (CxAdGrams) provide the grounding for a Conversation Analysis that fills the gap left by the TCM.

    From Computing Machineries to Cloud Computing: The Minimal Levels of Abstraction of Inforgs through History

    Before the modern computing era, the word `computers' referred to human beings working as living calculators; indeed, Turing (1950) still proposed his test for A.I. referring to `computing machinery', not `computers'. From the advent of general-purpose, Turing-complete machines, the relation of operators, programmers, and users with computers (inforgs, in Floridi's terms) can be seen in terms of levels of abstraction (LoAs) (Floridi 2008, 2010). For example, the modern concept of `operating system' (Donovan 1974), carried by Ken Thompson from Multics to Unix, can be seen as a level of abstraction: some computational tasks are hidden in an abstract machine put into the computer system, so that humans can forget them instead of manually performing them as living operators; information got hidden without being lost. In this paper an analysis of LoAs throughout history is proposed, in order to find the minimal number of LoAs needed to explain the epistemology of informational organisms (inforgs), from early modern general-purpose computing machinery to so-called `cloud computing'. This epistemological levellism uses Category Theory as its methodological reference, treating information as structure-preserving functions instead of Cartesian products, i.e., a domain mapped into a codomain whose inner structure is preserved; a comparison with the method of LoAs by Floridi (2008) is then proposed.
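    The abstract's claim that information can be treated as structure-preserving functions rather than Cartesian products can be made concrete with a small, purely illustrative sketch (the function names and sample data below are assumptions for demonstration, not taken from the paper): a map between two structured domains is structure-preserving when applying the map commutes with the domains' operations.

```python
# Illustrative sketch (not from the paper): checking that a function
# between two structured domains preserves their inner structure,
# analogous to a level of abstraction mapping a domain into a
# codomain without losing that structure.

def is_structure_preserving(f, samples, op_src, op_dst):
    """True iff f(op_src(a, b)) == op_dst(f(a), f(b)) on all sample pairs."""
    return all(f(op_src(a, b)) == op_dst(f(a), f(b))
               for a in samples for b in samples)

# len maps (strings, concatenation) into (integers, addition):
# concatenation structure in the domain is preserved as addition
# in the codomain, so information about lengths is hidden but not lost.
samples = ["", "a", "ab", "abc"]
print(is_structure_preserving(len, samples,
                              lambda a, b: a + b,   # string concatenation
                              lambda a, b: a + b))  # integer addition
# prints: True
```

    The same check fails for a map that scrambles the structure, which is the sense in which a Cartesian product of raw data, with no preserved operations, carries less usable information than a structure-preserving function.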

    Outcomes from elective colorectal cancer surgery during the SARS-CoV-2 pandemic

    This study aimed to describe the change in surgical practice and the impact of SARS-CoV-2 on mortality after surgical resection of colorectal cancer during the initial phases of the SARS-CoV-2 pandemic.

    Search for supersymmetry with a dominant R-parity violating LQ anti-D coupling in e+ e- collisions at center-of-mass energies of 130-GeV to 172-GeV

    A search for pair-production of supersymmetric particles under the assumption that R-parity is violated via a dominant LQD̄ coupling has been performed using the data collected by ALEPH at centre-of-mass energies of 130-172 GeV. The observed candidate events in the data are in agreement with the Standard Model expectation. This result is translated into lower limits on the masses of charginos, neutralinos, sleptons, sneutrinos and squarks. For instance, for m₀ = 500 GeV/c² and tan β = √2, charginos with masses smaller than 81 GeV/c² and neutralinos with masses smaller than 29 GeV/c² are excluded at the 95% confidence level for any generation structure of the LQD̄ coupling.

    Michel parameters and tau-neutrino helicity from decay correlations in Z --> tau+ tau-

    The Michel parameters and the average τ-neutrino helicity are measured using correlations between the decays of the τ+ and τ− produced on the Z resonance and observed in the ALEPH detector at LEP. The Michel parameters, ϱl, ηl, ξl, (δξ)l, are determined from τ → lνν̄ with l = (e, μ), and the average τ-neutrino helicity, 〈h(ντ)〉, from τ → hν with h = (π, ϱ, a1). The results obtained with e−μ universality are: ϱl = 0.751±0.039±0.022, ηl = −0.04±0.15±0.11, ξl = 1.18±0.15±0.06, (δξ)l = 0.88±0.11±0.07, and the average τ-neutrino helicity 〈h(ντ)〉 = −1.006±0.032±0.019. No significant deviation from the Standard Model V−A prediction is observed.

    Search for CP violation in the decay Z ---> tau+ tau-

    Data collected by ALEPH in the years 1990, 1991 and 1992 have been used to update a previous search for CP violation in the decay of the Z into τ+τ−. The measurement of the weak dipole form factor of the τ lepton has been performed by studying correlations between the τ leptons. No signal of CP violation was found. The weak dipole form factor is found to be d̃τ = (+0.15 ± 0.58(stat) ± 0.38(sys)) × 10⁻¹⁷ e·cm, obtained with 19628 identified τ+τ− events. This gives an upper limit on the weak dipole form factor of |d̃τ| < 1.5 × 10⁻¹⁷ e·cm at the 95% confidence level.

    Measurement of the b → τ⁻ν̄τX branching ratio and an upper limit on B⁻ → τ⁻ν̄τ

    Using 1.45 million hadronic Z decays collected by the ALEPH experiment at LEP, the b → τ⁻ν̄τX branching ratio is measured to be (2.75 ± 0.30 ± 0.37)%. In addition, an upper limit of 1.8 × 10⁻³ at the 90% confidence level is placed upon the exclusive branching ratio of B⁻ → τ⁻ν̄τ. These measurements are consistent with Standard Model expectations, and put the constraint tan β/M(H⁺) < 0.52 GeV⁻¹ at the 90% confidence level on all Type II two-Higgs-doublet models (such as the MSSM).