20 research outputs found

    Constructing applicative functors

    Get PDF
    Applicative functors define an interface to computation that is more general, and correspondingly weaker, than that of monads. First used in parser libraries, they are now seeing a wide range of applications. This paper sets out to explore the space of non-monadic applicative functors useful in programming. We work with a generalization, lax monoidal functors, and consider several methods of constructing useful functors of this type, just as transformers are used to construct computational monads. For example, coends, familiar to functional programmers as existential types, yield a range of useful applicative functors, including left Kan extensions. Other constructions are final fixed points, a limited sum construction, and a generalization of the semi-direct product of monoids. Implementations in Haskell are included where possible.
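
    The paper works in Haskell, so a small sketch of one of the constructions mentioned above may help. Below is the standard encoding of a left Kan extension as an existential type (a coend), together with the applicative instance it admits whenever g is applicative and f is a functor; the names are illustrative and not necessarily those used in the paper.

        {-# LANGUAGE ExistentialQuantification #-}

        -- Left Kan extension of g along f, encoded as an existential type (a coend).
        data Lan f g a = forall b. Lan (f b -> a) (g b)

        instance Functor (Lan f g) where
          fmap h (Lan k x) = Lan (h . k) x

        -- Lan f g is applicative (lax monoidal) whenever g is: the hidden payloads
        -- are paired up in g, and the f-shaped argument is split with fst/snd.
        instance (Functor f, Applicative g) => Applicative (Lan f g) where
          pure a              = Lan (const a) (pure ())
          Lan h x <*> Lan k y =
            Lan (\w -> h (fmap fst w) (k (fmap snd w))) ((,) <$> x <*> y)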

    How dangerousness evolves after court-ordered compulsory psychiatric admission: explorative prospective cohort study

    Get PDF
    Background: Compulsory admission is commonly regarded as necessary and justified for patients whose psychiatric condition represents a severe danger to themselves and others. However, while studies on compulsory admissions have reported on various clinical and social outcomes, little research has focused specifically on dangerousness, which in many countries is the core reason for compulsory admission.
    Aims: To study changes in dangerousness over time in adult psychiatric patients admitted by compulsory court order, and to relate these changes to these patients' demographic and clinical characteristics.
    Method: In this explorative prospective observational cohort study of adult psychiatric patients admitted by compulsory court order, demographic and clinical data were collected at baseline. At baseline and at 6- and 12-month follow-up, dangerousness was assessed using the Dangerousness Inventory, an instrument based on the eight types of dangerousness towards self or others specified in Dutch legislation on compulsory admissions. We used descriptive statistics and logistic regression to analyse the data.
    Results: We included 174 participants with a court-ordered compulsory admission. At baseline, the most common dangerousness criterion was inability to cope in society. Any type of severe or very severe dangerousness decreased from 86.2% at baseline to 36.2% at 6 months and to 28.7% at 12 months. Being homeless at baseline was the only variable significantly associated with persistently high levels of dangerousness.
    Conclusions: Dangerousness decreased in about two-thirds of the patients after court-ordered compulsory admission. It persisted, however, in a substantial minority (approximately one-third).
    Declaration of interest: None.

    Radiative corrections to the excitonic molecule state in GaAs microcavities

    Full text link
    The optical properties of excitonic molecules (XXs) in GaAs-based quantum well microcavities (MCs) are studied, both theoretically and experimentally. We show that the radiative corrections to the XX state, the Lamb shift $\Delta^{\rm MC}_{\rm XX}$ and radiative width $\Gamma^{\rm MC}_{\rm XX}$, are large, about 10-30% of the molecule binding energy $\epsilon_{\rm XX}$, and definitely cannot be neglected. The optics of excitonic molecules is dominated by the in-plane resonant dissociation of the molecules into outgoing $1\lambda$-mode and $0\lambda$-mode cavity polaritons. The latter decay channel, "excitonic molecule $\to$ $0\lambda$-mode polariton + $0\lambda$-mode polariton", deals with the short-wavelength MC polaritons invisible in standard optical experiments, i.e., refers to "hidden" optics of microcavities. By using transient four-wave mixing and pump-probe spectroscopies, we infer that the radiative width, associated with excitonic molecules of binding energy $\epsilon_{\rm XX} \simeq 0.9$-$1.1$ meV, is $\Gamma^{\rm MC}_{\rm XX} \simeq 0.2$-$0.3$ meV in the microcavities and $\Gamma^{\rm QW}_{\rm XX} \simeq 0.1$ meV in a reference GaAs single quantum well (QW). We show that for our high-quality quasi-two-dimensional nanostructures the $T_2 = 2T_1$ limit, relevant to the XX states, holds at temperatures below 10 K, and that the bipolariton model of excitonic molecules explains quantitatively and self-consistently the measured XX radiative widths. We also find and characterize two critical points in the dependence of the radiative corrections on the microcavity detuning, and propose to use the critical points for high-precision measurements of the molecule binding energy and microcavity Rabi splitting. Comment: 16 pages, 11 figures, accepted for publication in Phys. Rev.

    Embedded Compilers

    No full text
    For automation it is important to express the knowledge of experts in a form that is understood by a computer. Each area of knowledge has its own terminology and ways of formulating things, be it by drawing diagrams, using formulae, or using formalized languages. In the last case we say we have a "Domain Specific Language", and, since it is formalised, it can usually be processed or analysed by a computer or even be compiled into machine code. Domain-specific languages often do not have a formal specification and are usually designed and implemented in an ad-hoc fashion, frequently leading to inconsistent designs. Furthermore, they often lack features that are usually found in programming languages, such as abstraction mechanisms, type systems, and static analysis. Such features are often considered to take too much effort to implement. Initially these features are not really missed; however, when programs grow, they become indispensable. Defining a complete and consistent language from scratch is not an easy task, and so the question arises: how to support the design and implementation of domain-specific languages? A solution is to embed a domain-specific language in a general-purpose host language. This approach has many advantages. One does not have to implement an entirely new compiler; one can simply reuse the features of the host language, such as the type system and abstraction mechanisms. Furthermore, different embedded languages can be combined in a single program. In the Haskell community embedding domain-specific languages by means of combinator libraries is common practice. The rich type system and flexible notational features, such as user-defined operators, type classes, and do-notation, make Haskell very suitable for embedding domain-specific languages. Originally combinator-based embedded languages directly expressed the denotational semantics of the embedded language. As a consequence, techniques used in conventional compilers are not applicable because the representation of the embedded program is implicit. A logical next step along this line of development is to first build an intermediate structure, which can then be analyzed, transformed and optimized as in an ordinary compiler. How to do such things effectively in a strongly typed host language is the subject of this thesis: Embedded Compilers.
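
    To make the distinction above concrete, here is a minimal, self-contained Haskell sketch (with illustrative names, not taken from the thesis) contrasting a shallow embedding, which directly encodes the denotational semantics, with a deep embedding, which first builds an intermediate structure that can be analysed and optimised before it is interpreted.

        -- Shallow embedding: an arithmetic expression *is* its meaning.
        type ExprS = Int

        litS :: Int -> ExprS
        litS = id

        addS :: ExprS -> ExprS -> ExprS
        addS = (+)

        -- Deep embedding: an explicit intermediate structure (an AST).
        data ExprD
          = Lit Int
          | Add ExprD ExprD
          deriving Show

        -- A conventional compiler pass over the intermediate structure:
        -- constant folding, which cannot be expressed in the shallow embedding.
        simplify :: ExprD -> ExprD
        simplify (Add a b) =
          case (simplify a, simplify b) of
            (Lit x, Lit y) -> Lit (x + y)
            (a', b')       -> Add a' b'
        simplify e = e

        -- The interpreter recovers the denotational semantics.
        eval :: ExprD -> Int
        eval (Lit n)   = n
        eval (Add a b) = eval a + eval b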

    Typed Transformations of Typed Grammars: The Left Corner Transform

    Get PDF
    One of the questions that comes up when using embedded domain-specific languages is to what extent we can analyze and transform embedded programs, as is normally done in more conventional compilers. Special problems arise when the host language is strongly typed, and this host type system is used to type the embedded language. In this paper we describe how we can use a library, which was designed for constructing transformations of typed abstract syntax, in the removal of left recursion from a typed grammar description. The algorithm we describe is the Left-Corner Transform, which is small enough to be fully explained, involved enough to be interesting, and complete enough to serve as a tutorial on how to proceed in similar cases. The described transformation has been successfully used in constructing a compositional and efficient alternative to the standard Haskell read function.
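
    As a rough illustration of the problem the paper addresses, though with plain untyped combinators rather than the typed-abstract-syntax library it describes, the sketch below shows why a left-recursive rule cannot be run directly by a top-down combinator parser, and how the classic left-recursion-removing rewrite, of which the Left-Corner Transform is a typed and more general variant, restores termination. All names are illustrative.

        import Control.Applicative (Alternative (..), (<**>))
        import Data.Char (digitToInt, isDigit)

        -- A tiny list-of-successes parser, only for illustration; it is not the
        -- typed-abstract-syntax library described in the paper.
        newtype P a = P { runP :: String -> [(a, String)] }

        instance Functor P where
          fmap f (P p) = P (\s -> [ (f a, r) | (a, r) <- p s ])

        instance Applicative P where
          pure a        = P (\s -> [(a, s)])
          P pf <*> P pa = P (\s -> [ (f a, r') | (f, r) <- pf s, (a, r') <- pa r ])

        instance Alternative P where
          empty       = P (const [])
          P p <|> P q = P (\s -> p s ++ q s)

        symbol :: Char -> P Char
        symbol c = P go
          where go (x : xs) | x == c = [(c, xs)]
                go _                 = []

        digit :: P Int
        digit = P go
          where go (x : xs) | isDigit x = [(digitToInt x, xs)]
                go _                    = []

        -- The left-recursive grammar  E ::= E '+' T | T  would make a top-down
        -- parser call itself without consuming input.  The rewrite below threads
        -- the already parsed left operand through a "rest of expression" parser,
        -- preserving the left-associative semantics of the original grammar.
        expr :: P Int
        expr = digit <**> rest
          where
            rest :: P (Int -> Int)
            rest = (\_ t f acc -> f (acc + t)) <$> symbol '+' <*> digit <*> rest
               <|> pure id

        -- Example: [ v | (v, "") <- runP expr "1+2+3" ]  yields  [6].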

    Preserving order in non-order preserving parsers

    No full text

    Dependently Typed Grammars

    No full text
    Parser combinators are a popular tool for designing parsers in functional programming languages. If such combinators generate an abstract representation of the grammar as an intermediate step, it becomes easier to perform analyses and transformations that can improve the behaviour of the resulting parser. Grammar transformations must satisfy a number of invariants. In particular, they have to preserve the semantics associated with the grammar. Using conventional type systems, these constraints cannot be expressed satisfactorily, but as we show in this article, dependent types are a natural fit. We present a framework for grammars and grammar transformations using Agda. We implement the left-corner transformation for left-recursion removal and prove a language-inclusion property as use cases. Keywords: context-free grammars, grammar transformation, dependently typed programming.
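
    The paper's development is in Agda; a rough Haskell approximation (illustrative only, not the paper's code) is sketched below. Indexing productions by the type of semantic value they produce already forces grammar transformations to preserve those types, while full dependent types, as used in the paper, additionally allow language-level properties such as language inclusion to be stated and proved.

        {-# LANGUAGE GADTs, RankNTypes #-}

        -- A grammar of applicative-style productions, indexed by the type of the
        -- semantic value each production builds.
        data Grammar a where
          Empty :: Grammar a                                 -- recognises nothing
          Eps   :: a -> Grammar a                            -- recognises "" with value a
          Sym   :: Char -> Grammar Char                      -- a single terminal
          Seq   :: Grammar (b -> a) -> Grammar b -> Grammar a
          Alt   :: Grammar a -> Grammar a -> Grammar a

        -- A transformation applied at every node: the rank-2 type forces the
        -- rewrite to preserve the semantic type of each node it touches.
        rewrite :: (forall b. Grammar b -> Grammar b) -> Grammar a -> Grammar a
        rewrite f (Seq p q) = f (Seq (rewrite f p) (rewrite f q))
        rewrite f (Alt p q) = f (Alt (rewrite f p) (rewrite f q))
        rewrite f g         = f g

        -- Example rewrite: prune alternatives with an Empty branch,
        -- applied everywhere as  rewrite pruneEmpty g.
        pruneEmpty :: Grammar a -> Grammar a
        pruneEmpty (Alt Empty q) = q
        pruneEmpty (Alt p Empty) = p
        pruneEmpty g             = g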