740 research outputs found

    Data Provenance and Management in Radio Astronomy: A Stream Computing Approach

    New approaches for data provenance and data management (DPDM) are required for mega-science projects like the Square Kilometre Array, which are characterized by extremely large data volumes and intense data rates and therefore demand innovative and highly efficient computational paradigms. In this context, we explore a stream-computing approach with an emphasis on the use of accelerators. In particular, we make use of a new generation of high-performance stream-based parallelization middleware known as InfoSphere Streams. Its viability for managing and ensuring the interoperability and integrity of signal-processing data pipelines is demonstrated in radio astronomy. IBM InfoSphere Streams embraces the stream-computing paradigm: a shift from conventional data-mining techniques (the analysis of existing data held in databases) towards real-time analytic processing. We discuss using InfoSphere Streams for effective DPDM in radio astronomy and propose a way in which InfoSphere Streams can be utilized for large antenna arrays. We present a case study, the InfoSphere Streams implementation of an autocorrelating spectrometer, and use this example to discuss the advantages of the stream-computing approach and the utilization of hardware accelerators.
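    The case study above is an autocorrelating spectrometer. As a rough illustration of the underlying computation only (not the authors' InfoSphere Streams implementation), a power spectrum can be estimated from a signal's autocorrelation via the Wiener-Khinchin relation; the test signal and parameters below are made up for the sketch:

```python
import numpy as np

def autocorr_spectrum(samples, nlags):
    """Estimate a power spectrum from the signal's autocorrelation
    (Wiener-Khinchin: the spectrum is the Fourier transform of the ACF)."""
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Biased autocorrelation estimate for lags 0..nlags-1
    acf = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(nlags)])
    # Real FFT of the one-sided autocorrelation gives the spectrum estimate
    return np.abs(np.fft.rfft(acf))

# A pure tone should produce a spectral peak at its frequency bin:
# 64 Hz sampled at 1024 Hz with 256 lags -> bin 64 / (1024 / 256) = 16
fs = 1024
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 64 * t)
spec = autocorr_spectrum(tone, 256)
```

    In a streaming setting, the lag products would be accumulated incrementally per sample window rather than computed over a stored array, which is the part a middleware such as InfoSphere Streams parallelizes.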

    Agent-Based Team Aiding in a Time Critical Task

    In this paper we evaluate the effectiveness of agent-based aiding in support of a time-critical team-planning task for teams of both humans and heterogeneous software agents. The team task consists of human subjects playing the role of military commanders and cooperatively planning to move their respective units to a common rendezvous point, given time and resource constraints. The objective of the experiment was to compare the effectiveness of agent-based aiding for individual and team tasks against the baseline condition of manual route planning. There were two experimental conditions: the Aided condition, in which a Route Planning Agent (RPA) finds a least-cost plan between the start and rendezvous points for a given composition of force units; and the Baseline condition, in which the commanders determine initial routes manually and receive basic feedback about the route. We demonstrate that the Aided condition provides significantly better assistance for individual route planning and team-based re-planning.
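    The abstract does not detail the RPA's planning algorithm; a minimal least-cost route search of the kind it describes could be sketched with Dijkstra's algorithm over a hypothetical terrain graph (node names and costs below are invented for illustration):

```python
import heapq

def least_cost_route(graph, start, goal):
    """Dijkstra search over graph: node -> {neighbor: cost}.
    Returns (total_cost, path), or (inf, []) if goal is unreachable."""
    pq = [(0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, step in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + step, nbr, path + [nbr]))
    return float("inf"), []

# Hypothetical terrain: the ridge route is longer to reach but cheaper overall
terrain = {
    "unit_a": {"ridge": 4, "valley": 2},
    "valley": {"rendezvous": 5},
    "ridge": {"rendezvous": 1},
    "rendezvous": {},
}
cost, path = least_cost_route(terrain, "unit_a", "rendezvous")
```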

    Information and communication in a networked infosphere: a review of concepts and application in social branding

    This paper aims to contribute to a comprehensive review of the impact of information and communication, and their supporting technologies, on the current transformation of human life in the infosphere. The paper also offers an example of the power of new social approaches to the use of information and communication technologies to foster new working models in organizations, presenting the main outcomes of a research project on social branding. A discussion of some trends in the future impact of new information and communication technologies on the infosphere is also included.

    Turing's three philosophical lessons and the philosophy of information

    In this article, I outline the three main philosophical lessons that we may learn from Turing's work, and how they lead to a new philosophy of information. After a brief introduction, I discuss his work on the method of levels of abstraction (LoA), and his insistence that questions can be meaningfully asked only by specifying the correct LoA. I then look at his second lesson, about the sort of philosophical questions that seem to be most pressing today. Finally, I focus on the third lesson, concerning the new philosophical anthropology that owes so much to Turing's work. I then show how these lessons are taken up by the philosophy of information. In the conclusion, I draw a general synthesis of the points made, in view of the development of the philosophy of information itself as a continuation of Turing's work.

    Still minding the gap? Reflecting on transitions between concepts of information in varied domains

    This conceptual paper, a contribution to the tenth-anniversary special issue of Information, gives a cross-disciplinary review of general and unified theories of information. A selective literature review is used to update a 2013 article on bridging the gaps between conceptions of information in different domains, including material from the physical and biological sciences, from the humanities and social sciences including library and information science, and from philosophy. A variety of approaches and theories are reviewed, including those of Brenner, Brier, Burgin and Wu, Capurro, Cárdenas-García and Ireland, Hidalgo, Hofkirchner, Kolchinsky and Wolpert, Floridi, Mingers and Standing, Popper, and Stonier. The gaps between disciplinary views of information remain, although there has been progress, and increasing interest, in bridging them. The solution is likely to be either a general theory flexible enough to cope with multiple meanings of information, or multiple distinct theories for different domains that are complementary in nature and ideally linked by boundary-spanning concepts.

    BigExcel: A Web-Based Framework for Exploring Big Data in Social Sciences

    This paper argues that three fundamental challenges must be overcome to foster the adoption of big data technologies in disciplines outside computer science: addressing the accessibility of such technologies for non-computer scientists, supporting the ad hoc exploration of large data sets with minimal effort, and providing lightweight web-based frameworks for quick and easy analytics. We address these three challenges through the development of 'BigExcel', a three-tier web-based framework for exploring big data that facilitates the management of user interactions with large data sets, the construction of queries to explore the data, and the management of the infrastructure. The feasibility of BigExcel is demonstrated through two Yahoo Sandbox datasets. The first is the Yahoo Buzz Score data set, which we use for quantitatively predicting trending technologies; the second is the Yahoo n-gram corpus, which we use for qualitatively inferring the coverage of important events. A demonstration of the BigExcel framework and its source code is available at http://bigdata.cs.st-andrews.ac.uk/projects/bigexcel-exploring-big-data-for-social-sciences/.

    Information and Design: Book Symposium on Luciano Floridi’s The Logic of Information

    Purpose – To review and discuss Luciano Floridi’s 2019 book The Logic of Information: A Theory of Philosophy as Conceptual Design, the latest instalment in his philosophy of information (PI) tetralogy, particularly with respect to its implications for library and information studies (LIS). Design/methodology/approach – Nine scholars with research interests in philosophy and LIS read and responded to the book, raising critical and heuristic questions in the spirit of scholarly dialogue. Floridi responded to these questions. Findings – Floridi’s PI, including this latest publication, is of interest to LIS scholars, and much insight can be gained by exploring this connection. LIS also has the potential to contribute to PI’s further development in some respects. Research implications – Floridi’s PI work is technical philosophy that many LIS scholars lack the training or patience to engage with, yet doing so is rewarding. This suggests a role for translational work between philosophy and LIS. Originality/value – The book symposium format, not yet seen in LIS, provides a forum for sustained, multifaceted and generative dialogue around ideas.
