1,407,644 research outputs found

    Interview with Peter Mertens and Wolfgang König: “From Reasonable Automation to (Sustainable) Autonomous Systems”

    Get PDF
    Peter Mertens is Professor Emeritus of Wirtschaftsinformatik at the Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg. After studying industrial engineering, he completed his doctoral studies at the TH Darmstadt (1961) and his habilitation at the TU München (1966). From 1966 to 1968, he worked for a large software and consulting firm in Switzerland, first as a system designer and later as a managing director. In 1968, Peter Mertens took over the first chaired professorship specializing in business data processing at the University of Linz; he is considered one of the founding fathers of Wirtschaftsinformatik in the German-speaking world. Until September 2005, he held the Chair of Business Administration, especially Wirtschaftsinformatik I, at the Faculty of Business and Social Sciences of FAU. In parallel, he headed the computer science research group "Business Applications" at FAU's Faculty of Engineering. Since fall 2005, he has worked as an emeritus professor at his former chair. Peter Mertens is the author of numerous books, including 23 monographs, and has been involved in editing 26 collective works. The first volume of his book "Integrated Information Processing" has been published in 18 editions, and some of his books have been translated into English, Chinese, Italian, and Russian. Among other honors, he is a Fellow of the German Informatics Society, holds honorary doctorates from five universities in Germany, Austria, and Switzerland, and has been awarded the Order of Merit of the Federal Republic of Germany. From 1990 until 2000, Peter Mertens served as Editor-in-Chief of WIRTSCHAFTSINFORMATIK (now: BISE).

    Until 2016, Wolfgang König was Professor of Business Administration, especially Information Systems and Information Management, at the Faculty of Economics and Business Administration of Goethe University Frankfurt a. M., and until January 2022 he was Chairman of the E-Finance Lab (since 2020: efl – the Data Science Institute) at Goethe University. Since 2008, he has held the position of Executive Director of the House of Finance of Goethe University, and since 2016 he has served as Senior Professor at Goethe University. From 1998 until 2008, König served as Editor-in-Chief of WIRTSCHAFTSINFORMATIK (now: BISE).

    Both Peter Mertens and Wolfgang König are clearly among the research pioneers of automated systems, which can be seen as precursors of the central topic of this special issue: autonomous systems (AS). The key difference between automated systems and AS is that in AS, machines or other technology actors have at least some agency (i.e., they can act autonomously), whereas in automated systems the agency still lies with humans, who, for example, define the relevant rule system, while machines and technologies merely automate the execution of these predefined rules.
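    To make the automated-versus-autonomous distinction concrete, here is a minimal, hypothetical sketch (not from the interview; class names and the feedback mechanism are illustrative assumptions): the automated system only executes a human-defined rule, while the autonomous one revises its own rule from feedback.

```python
# Hypothetical contrast: automated rule execution vs. autonomous adaptation.

class AutomatedThermostat:
    """The rule is fixed by a human designer; the machine only executes it."""
    def __init__(self, threshold: float):
        self.threshold = threshold  # human-defined rule parameter

    def act(self, temperature: float) -> str:
        return "heat" if temperature < self.threshold else "idle"


class AutonomousThermostat:
    """Adjusts its own set point from feedback; some agency moves to the machine."""
    def __init__(self, threshold: float = 20.0, rate: float = 0.1):
        self.threshold = threshold
        self.rate = rate

    def act(self, temperature: float, comfort_feedback: float) -> str:
        # The system modifies the rule itself based on observed outcomes.
        self.threshold += self.rate * comfort_feedback
        return "heat" if temperature < self.threshold else "idle"
```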

    Natural language processing

    Get PDF
    Beginning with the basic issues of NLP, this chapter aims to chart the major research activities in the area since the last ARIST chapter in 1996 (Haas, 1996), including: (i) natural language text processing systems, such as text summarization, information extraction, and information retrieval, including domain-specific applications; (ii) natural language interfaces; (iii) NLP in the context of the World Wide Web and digital libraries; and (iv) evaluation of NLP systems.
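    As a minimal illustration of one of the tasks surveyed, text summarization, the following sketch ranks sentences by aggregate word frequency and returns the top ones in document order. It is a generic baseline, not a method from the chapter; all names are illustrative.

```python
# Minimal frequency-based extractive summarizer (illustrative baseline).
import re
from collections import Counter

def summarize(text: str, n_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)

    # Score each sentence by the total corpus frequency of its words.
    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Preserve the original order of the selected sentences.
    return " ".join(s for s in sentences if s in ranked)
```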

    Requirements modelling and formal analysis using graph operations

    Get PDF
    The increasing complexity of enterprise systems requires a more advanced analysis of the expected services than is currently possible. Consequently, the specification stage, which can be facilitated by formal verification, becomes very important to the system life-cycle. This paper presents a formal modelling approach that may be used to better represent the reality of the system and to verify the expected or existing properties of the system, taking environmental characteristics into account. To this end, we first propose a formalization process based upon the specification of properties, and second we use Conceptual Graph operations to develop reasoning mechanisms for verifying requirements statements. The graphical visualization of this reasoning enables us to capture the system specifications correctly by making it easier to determine whether the desired properties hold. The approach is applied to the field of enterprise modelling.
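    The central Conceptual Graph operation for this kind of check is projection: a requirement graph holds in a model graph if it maps into it while respecting the type hierarchy. The following is a rough sketch under strong simplifying assumptions (graphs as flat relation triples, no coreference links); it is not the paper's formalization.

```python
# Simplified projection-style check between conceptual graphs (illustrative).
# A graph is a set of (concept, relation, concept) triples; the hierarchy maps
# each type to its immediate supertype (top-level types map to nothing).

def is_subtype(t, super_t, hierarchy):
    while t is not None:
        if t == super_t:
            return True
        t = hierarchy.get(t)
    return False

def projects(requirement, model, hierarchy):
    """Every requirement triple must be matched by a model triple whose
    concept types are equal or more specific."""
    return all(
        any(rel == m_rel
            and is_subtype(m_a, a, hierarchy)
            and is_subtype(m_b, b, hierarchy)
            for (m_a, m_rel, m_b) in model)
        for (a, rel, b) in requirement)

# Example: does the model satisfy "a Service processes an Order"?
hierarchy = {"BillingService": "Service", "PurchaseOrder": "Order"}
model = {("BillingService", "processes", "PurchaseOrder")}
requirement = {("Service", "processes", "Order")}
print(projects(requirement, model, hierarchy))  # True
```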

    Gibbs entropy and irreversible thermodynamics

    Full text link
    Recently, a number of approaches have been developed to connect the microscopic dynamics of particle systems to the macroscopic properties of systems in nonequilibrium stationary states, via the theory of dynamical systems. In this way, a direct connection between dynamics and Irreversible Thermodynamics has been claimed to have been found. However, the main quantity used in these studies is a (coarse-grained) Gibbs entropy, which does not seem to us suitable, in its present form, to characterize nonequilibrium states. Various simplified models have also been devised to give explicit examples of how the coarse-grained approach may succeed in giving a full description of Irreversible Thermodynamics. We analyze some of these models, pointing out a number of difficulties which, in our opinion, need to be overcome in order to establish a physically relevant connection between these models and Irreversible Thermodynamics.
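    For reference, the quantities at issue are the fine-grained and coarse-grained Gibbs entropies. These are the standard textbook definitions, not formulas quoted from the paper:

```latex
% Fine-grained Gibbs entropy of a phase-space density f(x,t):
S_G[f] \;=\; -k_B \int f(x,t)\,\ln f(x,t)\,\mathrm{d}x ,
% which Liouville's theorem keeps constant under Hamiltonian flow.
% Coarse-grained version over cells C_i, with p_i = \int_{C_i} f \,\mathrm{d}x:
\hat{S}_G \;=\; -k_B \sum_i p_i \ln \frac{p_i}{|C_i|} ,
% which, unlike S_G, can grow in time and is therefore invoked to
% mimic irreversible behaviour.
```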

    Generating indicative-informative summaries with SumUM

    Get PDF
    We present and evaluate SumUM, a text summarization system that takes a raw technical text as input and produces an indicative-informative summary. The indicative part of the summary identifies the topics of the document, and the informative part elaborates on some of these topics according to the reader's interest. SumUM motivates the topics, describes entities, and defines concepts. It is a first step toward exploring the issue of dynamic summarization. This is accomplished through a process of shallow syntactic and semantic analysis, concept identification, and text regeneration. Our method was developed through the study of a corpus of abstracts written by professional abstractors. Relying on human judgment, we have evaluated the indicativeness, informativeness, and text acceptability of the automatic summaries. The results thus far indicate good performance compared with other summarization technologies.
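    The indicative/informative split can be sketched as follows: the indicative part names detected topics, and the informative part expands only those the reader selects. This is a toy sketch in the spirit of that split, not the authors' actual analysis-and-regeneration pipeline; all function names and the topic heuristic are assumptions.

```python
# Toy indicative-informative split (illustrative, not SumUM's pipeline).
import re
from collections import Counter

def topics(text: str, k: int = 3) -> list[str]:
    # Crude topic proxy: the most frequent longer words.
    words = re.findall(r"[a-z]{5,}", text.lower())
    return [w for w, _ in Counter(words).most_common(k)]

def indicative(text: str) -> str:
    return "This document discusses: " + ", ".join(topics(text)) + "."

def informative(text: str, chosen: str) -> str:
    # Elaborate only on the topic the reader asked about.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(s for s in sentences if chosen in s.lower())

doc = ("Graph grammars describe transformations. "
       "Graph grammars are applied to model checking. "
       "Petri nets capture concurrency in workflows.")
print(indicative(doc))
print(informative(doc, "grammars"))
```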

    Information transfer in signaling pathways : a study using coupled simulated and experimental data

    Get PDF
    Background: The topology of signaling cascades has been studied in quite some detail. However, how exactly information is processed is still relatively unknown. Since quite diverse information has to be transported by one and the same signaling cascade (e.g., in the case of different agonists), it is clear that the underlying mechanism is more complex than a simple binary switch that relies on the mere presence or absence of a particular species. Therefore, finding means to analyze the information transferred will help in deciphering how information is processed in the cell. Using the information-theoretic measure transfer entropy, we studied the properties of information transfer in an example case, namely calcium signaling under different cellular conditions. Transfer entropy is an asymmetric and dynamic measure of the dependence between two (nonlinear) stochastic processes. We used calcium signaling since it is a well-studied example of complex cellular signaling; it has been suggested that specific information is encoded in the amplitude, frequency, and waveform of the oscillatory Ca2+ signal.

    Results: We set up a computational framework to study information transfer, e.g., for calcium signaling at different levels of activation and different particle numbers in the system. We stochastically coupled simulated and experimentally measured calcium signals to simulated target proteins and used kernel density methods to estimate the transfer entropy from these bivariate time series. We found that, most of the time, the transfer entropy increases with increasing particle numbers; in systems with only a few particles, faithful information transfer is hampered by random fluctuations. The transfer entropy also seems to be slightly correlated with the complexity (spiking, bursting, or irregular oscillations) of the signal. Finally, we discuss a number of peculiarities of our approach in detail.

    Conclusion: This study presents the first application of transfer entropy to biochemical signaling pathways. We could quantify the information transferred from simulated and experimentally measured calcium signals to a target enzyme under different cellular conditions. Our approach, comprising stochastic coupling and the information-theoretic measure transfer entropy, could also be a valuable tool for the analysis of other signaling pathways.
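    Transfer entropy from a source Y to a target X is, in its simplest first-order form, TE(Y -> X) = sum p(x_{t+1}, x_t, y_t) log [ p(x_{t+1} | x_t, y_t) / p(x_{t+1} | x_t) ]. A plug-in estimator on discretized series might look like the sketch below; note that the paper uses kernel density estimation, whereas this sketch uses simple histogram binning, and plug-in estimates are biased upward for short series.

```python
# Plug-in transfer entropy estimator on binned time series (illustrative;
# histogram binning stands in for the paper's kernel density approach).
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=8):
    """Estimate TE(Y -> X) with first-order histories, in nats."""
    # Discretize both series into equal-width bins.
    xd = np.digitize(x, np.histogram_bin_edges(x, bins))
    yd = np.digitize(y, np.histogram_bin_edges(y, bins))
    # Joint counts over (x_{t+1}, x_t, y_t) and the needed marginals.
    triples = list(zip(xd[1:], xd[:-1], yd[:-1]))
    n = len(triples)
    c_xxy = Counter(triples)
    c_xy = Counter((xt, yt) for _, xt, yt in triples)
    c_xx = Counter((x1, xt) for x1, xt, _ in triples)
    c_x = Counter(xt for _, xt, _ in triples)
    te = 0.0
    for (x1, xt, yt), c in c_xxy.items():
        p_joint = c / n
        p_cond_full = c / c_xy[(xt, yt)]          # p(x_{t+1} | x_t, y_t)
        p_cond_hist = c_xx[(x1, xt)] / c_x[xt]    # p(x_{t+1} | x_t)
        te += p_joint * np.log(p_cond_full / p_cond_hist)
    return te

# Example: y drives x with a one-step lag, so TE(y -> x) should be positive.
rng = np.random.default_rng(0)
y = rng.normal(size=5000)
x = np.roll(y, 1) + 0.1 * rng.normal(size=5000)
print(transfer_entropy(x, y))
```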

    High-Temperature Expansions of Bures and Fisher Information Priors

    Full text link
    For certain infinite- and finite-dimensional thermal systems, we obtain, incorporating quantum-theoretic considerations into the Bayesian thermostatistical investigations of Lavenda, high-temperature expansions of priors over the inverse temperature beta induced by the volume elements ("quantum Jeffreys priors") of Bures metrics. Similarly to Lavenda's results based on the volume elements (Jeffreys priors) of (classical) Fisher information metrics, we find that in the limit beta -> 0, the quantum-theoretic priors either conform to Jeffreys' rule for variables over [0, infinity], by being proportional to 1/beta, or to the Bayes-Laplace principle of insufficient reason, by being constant. Whether a system adheres to one rule or the other appears to depend upon its number of degrees of freedom.
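    For context, the classical Jeffreys rule referred to above takes the prior proportional to the square root of the Fisher information; for a pure scale-like parameter such as beta on [0, infinity], this yields the 1/beta form. These are standard Bayesian relations, not the expansions derived in the paper:

```latex
% Jeffreys' rule: prior proportional to the root of the Fisher information
\pi_J(\beta) \;\propto\; \sqrt{I(\beta)},
\qquad
I(\beta) \;=\; -\,\mathbb{E}\!\left[
  \frac{\partial^2 \ln p(x \mid \beta)}{\partial \beta^2}\right].
% For a pure scale parameter, I(\beta) \propto 1/\beta^2, giving
\pi_J(\beta) \;\propto\; \frac{1}{\beta},
% while the Bayes-Laplace principle of insufficient reason instead
% assigns a constant prior, \pi_{BL}(\beta) \propto 1.
```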

    Understanding spatial data usability

    Get PDF
    In the recent geographical information science literature, a number of researchers have made passing reference to an apparently new characteristic of spatial data known as 'usability'. While this attribute is well known to professionals engaged in software engineering and computer interface design and testing, extending the concept to embrace information would seem to be a new development. Furthermore, while notions such as the use and value of spatial information, and the diffusion of spatial information systems, have been the subject of research since the late 1980s, the current references to usability clearly represent something that extends well beyond that initial research. Accordingly, the purposes of this paper are: (1) to understand what is meant by spatial data usability; (2) to identify the elements that might comprise usability; and (3) to consider what the related research questions might be.