
    Traditional Accounting with Decentralised Ledger Technology

    Distributed ledger technology is believed by some to be the accounting system of the future, replacing the centuries-old double-entry accounting paradigm, as it has desirable characteristics such as tamper-resistance. However, it might suffer from technology lock-in, as double-entry bookkeeping, due to its long-standing history, has provided the conceptual foundations for many laws, regulations and business practices. While some of these laws, regulations and practices might become obsolete as a result of distributed ledger technology, others might still prove valuable in a new technological context. While aiming to unlock the potential of distributed ledger technology in an accounting context, we also want to preserve the wisdom of accounting craftsmanship. For this reason, the aim of this paper is to offer a bi-directional mapping between traditional double-entry bookkeeping and innovative paradigms that have proven their value in decentralised systems, of which distributed ledger technology is an exponent. This paper offers such a mapping for the Resource-Event-Agent paradigm.
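
    As a rough illustration of the kind of mapping the paper describes, the Python sketch below translates a double-entry journal entry into Resource-Event-Agent style event records. The class and function names are hypothetical, and the paper's own mapping is considerably richer and bi-directional.

        # A deliberately crude sketch: each journal line becomes one REA economic
        # event that increments or decrements a resource held by the enterprise.
        # Real REA models also pair events through duality; this is illustrative only.
        from dataclasses import dataclass

        @dataclass
        class JournalLine:
            account: str           # e.g. "Cash", "Inventory"
            debit: float = 0.0
            credit: float = 0.0

        @dataclass
        class EconomicEvent:
            resource: str          # economic resource affected
            kind: str              # "increment" or "decrement" (stock-flow direction)
            amount: float
            inside_agent: str      # enterprise unit accountable for the event
            outside_agent: str     # trading partner

        def journal_to_rea(lines, inside_agent, outside_agent):
            """Crudely map each debit/credit line to one REA economic event."""
            events = []
            for line in lines:
                kind = "increment" if line.debit else "decrement"
                events.append(EconomicEvent(line.account, kind,
                                            line.debit or line.credit,
                                            inside_agent, outside_agent))
            return events

        # Example: a cash sale of 100 recorded as "debit Cash, credit Sales".
        sale = [JournalLine("Cash", debit=100.0), JournalLine("Sales", credit=100.0)]
        for event in journal_to_rea(sale, "Sales department", "Customer"):
            print(event)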

    Generalized inverse mean curvature flows in spacetime

    Motivated by the conjectured Penrose inequality and by the work of Hawking, Geroch, Huisken and Ilmanen in the null and the Riemannian case, we examine necessary conditions on flows of two-surfaces in spacetime under which the Hawking quasilocal mass is monotone. We focus on a subclass of such flows which we call uniformly expanding, which can be considered for null as well as for spacelike directions. In the null case, local existence of the flow is guaranteed. In the spacelike case, the uniformly expanding condition leaves a 1-parameter freedom, but for the whole family, the embedding functions satisfy a forward-backward parabolic system for which local existence does not hold in general. Nevertheless, we have obtained a generalization of the weak (distributional) formulation of this class of flows, generalizing the corresponding step of Huisken and Ilmanen's proof of the Riemannian Penrose inequality. Comment: 21 pages, 1 figure
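
    For reference, the quantity whose monotonicity is examined is the Hawking quasilocal mass of a closed two-surface Σ. In the Riemannian (time-symmetric) case used by Huisken and Ilmanen it takes the form below; in spacetime, H² is replaced by the Lorentzian length squared of the mean curvature vector, with normalization conventions varying between authors.

        % Hawking quasilocal mass of a closed two-surface \Sigma (Riemannian case):
        m_H(\Sigma) \;=\; \sqrt{\frac{|\Sigma|}{16\pi}}
          \left(1 - \frac{1}{16\pi}\int_{\Sigma} H^{2}\, d\mu\right)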

    The OntoREA Accounting Model: Ontology-based Modeling of the Accounting Domain

    McCarthy developed a framework for modeling the economic rationale of different business transactions along the enterprise value chain, described in his seminal article “The REA Accounting Model – A Generalized Framework for Accounting Systems in a Shared Data Environment”. Originally, the REA accounting model was specified in the entity-relationship (ER) language. Later on, other languages – especially generic data models and UML class models – were used. Recently, the OntoUML language was developed by Guizzardi and used by Gailly et al. for a metaphysical reengineering of the REA enterprise ontology. Although the REA accounting model originally addressed the accounting domain, it is most successfully applied as a reference framework for the conceptual modeling of enterprise systems. The primary research objective of this article is to anchor REA-based models more deeply in the accounting domain. In order to achieve this objective, essential primitives of the REA model are identified and conceptualized in the OntoUML language within the Asset-Liability-Equity (ALE) context of the traditional accounting domain.
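
    The ALE context referred to above is the classical balance-sheet identity of traditional accounting:

        \text{Assets} \;=\; \text{Liabilities} + \text{Equity}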

    A fast and intuitive method for calculating dynamic network reconfiguration and node flexibility

    Dynamic interactions between brain regions, either during rest or during performance of cognitive tasks, have been studied extensively using a wide variety of methods. Although some of these methods allow elegant mathematical interpretations of the data, they can easily become computationally expensive or difficult to interpret and compare between subjects or groups. Here, we propose an intuitive and computationally efficient method to measure dynamic reconfiguration of brain regions, also termed flexibility. Our flexibility measure is defined in relation to an a-priori set of biologically plausible brain modules (or networks) and does not rely on a stochastic data-driven module estimation, which, in turn, minimizes computational burden. The change of affiliation of brain regions over time with respect to these a-priori template modules is used as an indicator of brain network flexibility. We demonstrate that our proposed method yields highly similar patterns of whole-brain network reconfiguration (i.e., flexibility) during a working memory task as compared to a previous study that uses a data-driven, but computationally more expensive, method. This result illustrates that the use of a fixed modular framework allows for valid, yet more efficient, estimation of whole-brain flexibility, while the method additionally supports more fine-grained (e.g., node- and group-of-nodes-scale) flexibility analyses restricted to biologically plausible brain networks.
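
    A minimal sketch of the kind of computation described, assuming node-by-time-window module affiliations relative to a fixed template partition; variable names are illustrative, not the authors' code.

        import numpy as np

        def template_flexibility(affiliations):
            """Fraction of consecutive time windows in which each node changes
            its module affiliation, where affiliations[node, window] holds the
            label of the a-priori template module the node is assigned to.
            Returns one flexibility value per node (0 = never switches,
            1 = switches at every step)."""
            affiliations = np.asarray(affiliations)
            switches = affiliations[:, 1:] != affiliations[:, :-1]
            return switches.mean(axis=1)

        # Toy example: 3 nodes, 5 windows, labels index template networks.
        labels = np.array([[0, 0, 1, 1, 0],
                           [2, 2, 2, 2, 2],
                           [1, 0, 1, 0, 1]])
        print(template_flexibility(labels))   # [0.5, 0.0, 1.0]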

    Equivariant geometric K-homology for compact Lie group actions

    Let G be a compact Lie group and X a compact G-CW-complex. We define equivariant geometric K-homology groups K^G_*(X), using an obvious equivariant version of the (M,E,f)-picture of Baum-Douglas for K-homology. We define explicit natural transformations to and from equivariant K-homology defined via KK-theory (the "official" equivariant K-homology groups) and show that these are isomorphisms. Comment: 25 pages. v2: some mistakes corrected, more detail added, Michael Walter added as author. To appear in Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg
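
    For orientation, a geometric (Baum-Douglas) cycle over X is a triple (M, E, f); in the equivariant version every ingredient carries a compatible G-action, roughly as follows.

        % A geometric (Baum--Douglas) cycle over X, made G-equivariant:
        (M, E, f): \quad
        M \ \text{a compact spin}^c\ \text{manifold with a $G$-action preserving the spin}^c\ \text{structure},
        \quad E \to M \ \text{a $G$-equivariant complex vector bundle},
        \quad f\colon M \to X \ \text{a continuous $G$-map}.

    The groups K^G_*(X) are then generated by such cycles modulo the usual relations of disjoint union/direct sum, bordism and vector bundle modification.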

    Parity Violating Measurements of Neutron Densities

    Parity violating electron-nucleus scattering is a clean and powerful tool for measuring the spatial distributions of neutrons in nuclei with unprecedented accuracy. Parity violation arises from the interference of electromagnetic and weak neutral amplitudes, and the Z^0 of the Standard Model couples primarily to neutrons at low Q^2. The data can be interpreted with as much confidence as electromagnetic scattering. After briefly reviewing the present theoretical and experimental knowledge of neutron densities, we discuss possible parity violation measurements, their theoretical interpretation, and applications. The experiments are feasible at existing facilities. We show that theoretical corrections are either small or well understood, which makes the interpretation clean. The quantitative relationship to atomic parity nonconservation observables is examined, and we show that the electron scattering asymmetries can be directly applied to atomic PNC because the observables have approximately the same dependence on nuclear shape. Comment: 38 pages, 7 ps figures, very minor changes, submitted to Phys. Rev.
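
    Schematically, the measured quantity is the helicity asymmetry of the elastic cross section, which for a spin-0 nucleus is proportional to the ratio of the weak (essentially neutron) form factor to the charge form factor. The sketch below omits the exact prefactors (the nuclear weak-to-electric charge ratio and sign conventions), which are spelled out in the paper.

        % Helicity asymmetry for elastic scattering from a spin-0 nucleus (schematic):
        A_{PV} \;=\; \frac{\sigma_R - \sigma_L}{\sigma_R + \sigma_L}
          \;\sim\; \frac{G_F\,Q^{2}}{4\pi\alpha\sqrt{2}}\,
          \frac{F_W(Q^{2})}{F_{\mathrm{ch}}(Q^{2})}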

    Strange Stars with a Density-Dependent Bag Parameter

    We have studied strange quark stars in the framework of the MIT bag model, allowing the bag parameter B to depend on the density of the medium. We have also studied the effect of Cooper pairing among quarks on the stellar structure. Comparison of these two effects shows that the former is generally more significant. We studied the resulting equation of state of the quark matter, the stellar mass-radius relation, the mass-central-density relation, the radius-central-density relation, and the variation of the density as a function of the distance from the centre of the star. We found that the density-dependent B allows stars with larger masses and radii, due to stiffening of the equation of state. Interestingly, certain stellar configurations are found to be possible only if B depends on the density. We have also studied the effect of variation of the superconducting gap parameter on our results. Comment: 23 pages, 8 figs; v2: 25 pages, 9 figs, version to be published in Phys. Rev. D
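
    For orientation, the simplest bag-model equation of state for free massless quarks ties pressure and energy density through the bag parameter; letting B fall with baryon density n_B makes the matter stiffer at high density, which is the effect reported above. The Gaussian-type parametrization shown below is a commonly used form and is written here as an assumption, not necessarily the authors' exact choice.

        % Free, massless-quark bag-model equation of state with a density-dependent B:
        p \;=\; \tfrac{1}{3}\bigl(\varepsilon - 4B(n_B)\bigr),
        \qquad
        B(n_B) \;=\; B_{\mathrm{as}} + \bigl(B_0 - B_{\mathrm{as}}\bigr)
          \exp\!\Bigl[-\beta\Bigl(\tfrac{n_B}{n_0}\Bigr)^{2}\Bigr]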

    Motor cortical excitability and plasticity in patients with neurofibromatosis type 1

    Objective: Neurofibromatosis type 1 (NF1) is an autosomal dominant genetic disorder that is associated with cognitive disabilities. Based on animal studies, these disabilities are hypothesized to result from increased activity of inhibitory interneurons, which decreases synaptic plasticity. We obtained transcranial magnetic stimulation (TMS)-based measures of cortical excitability and plasticity in patients with NF1.

    Chemostratigraphy of Neoproterozoic carbonates: implications for 'blind dating'

    The δ13C(carb) and 87Sr/86Sr secular variations in Neoproterozoic seawater have been used for the purpose of 'isotope stratigraphy', but there are a number of problems that can preclude its routine use. In particular, it cannot be used with confidence for 'blind dating'. The compilation of isotopic data on carbonate rocks reveals a high level of inconsistency between the various carbon isotope age curves constructed for Neoproterozoic seawater, caused by a relatively high frequency of both global and local δ13C(carb) fluctuations combined with few reliable age determinations. A further complication is the unresolved problem of whether two or four glaciations, and associated negative δ13C(carb) excursions, can be reliably documented. Carbon isotope stratigraphy cannot be used alone for geological correlation and 'blind dating'. Strontium isotope stratigraphy is a more reliable and precise tool for stratigraphic correlations and indirect age determinations. Combining strontium and carbon isotope stratigraphy, several discrete ages within the 590-544 Myr interval, and two age groups at 660-610 and 740-690 Myr, can be resolved.
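
    For readers unfamiliar with the notation, δ13C(carb) denotes the per-mil deviation of the carbonate's 13C/12C ratio from a reference standard (conventionally VPDB), and 87Sr/86Sr is the measured strontium isotope ratio.

        % Standard delta notation for carbon isotopes (values in per mil):
        \delta^{13}\mathrm{C} \;=\;
          \left(\frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{sample}}}
                     {(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{standard}}} - 1\right)
          \times 10^{3}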

    Neutrinoless double-beta decay and seesaw mechanism

    From the standard seesaw mechanism of neutrino mass generation, which is based on the assumption that lepton number is violated at a large (~10^15 GeV) scale, it follows that neutrinoless double-beta decay is governed by the Majorana neutrino mass mechanism. Within this framework, for the inverted neutrino-mass hierarchy we derive allowed ranges of half-lives of the neutrinoless double-beta decay for nuclei of experimental interest with different sets of nuclear matrix elements. The present-day results of the calculation of the neutrinoless double-beta decay nuclear matrix elements are briefly discussed. We argue that if neutrinoless double-beta decay is observed in future experiments sensitive to the effective Majorana mass in the inverted mass hierarchy region, a comparison of the derived ranges with the measured half-lives will allow us to probe the standard seesaw mechanism, assuming that future cosmological data establish the sum of neutrino masses to be about 0.2 eV. Comment: Some changes in sections I, II, IV, and V; two new figures; additional references
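
    The quantities involved are the standard ones: the effective Majorana mass, built from the PMNS mixing-matrix elements U_ei and the neutrino masses m_i, and the half-life it controls through the phase-space factor and the nuclear matrix element.

        % Effective Majorana mass and its relation to the 0\nu\beta\beta half-life
        % (G^{0\nu}: phase-space factor, M^{0\nu}: nuclear matrix element):
        |m_{\beta\beta}| \;=\; \Bigl|\sum_{i} U_{ei}^{2}\, m_{i}\Bigr|,
        \qquad
        \bigl[T_{1/2}^{0\nu}\bigr]^{-1} \;=\;
          G^{0\nu}(Q,Z)\,\bigl|M^{0\nu}\bigr|^{2}\,
          \frac{|m_{\beta\beta}|^{2}}{m_{e}^{2}}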