    The computer revolution in science: steps towards the realization of computer-supported discovery environments

    The tools that scientists use in their search processes together form so-called discovery environments. The promise of artificial intelligence and other branches of computer science is to radically transform conventional discovery environments by equipping scientists with a range of powerful computer tools, including large-scale shared knowledge bases and discovery programs. We describe the future computer-supported discovery environments that may result, and illustrate by means of a realistic scenario how scientists come to new discoveries in these environments. To make the step from the current generation of discovery tools to computer-supported discovery environments like the one presented in the scenario, developers should realize that such environments are large-scale sociotechnical systems. They should not focus only on isolated computer programs, but also pay attention to the question of how these programs will be used and maintained by scientists in research practice. To help developers of discovery programs integrate their tools into discovery environments, we formulate a set of guidelines that developers can follow.

    Social Machines

    The term 'social machine' has recently been coined to refer to Web-based systems that support a variety of socially-relevant processes. Such systems (e.g., Wikipedia, Galaxy Zoo, Facebook, and reCAPTCHA) are progressively altering the way a broad array of social activities are performed, ranging from how we communicate and transmit knowledge to how we establish romantic partnerships, generate ideas, produce goods, and maintain friendships. They are also poised to deliver new kinds of intelligent processing capability by virtue of their ability to integrate the complementary contributions of both the human social environment and a global nexus of distributed computational resources. This chapter provides an overview of recent research into social machines. It examines what social machines are and discusses the kinds of social machines that currently exist. It also presents a range of issues that are the focus of current research attention within the Web Science community.

    Optical tomography: Image improvement using mixed projection of parallel and fan beam modes

    Mixed parallel and fan beam projection is a technique used to increase image quality. This research focuses on enhancing image quality in optical tomography. Image quality is quantified by the Peak Signal to Noise Ratio (PSNR) and Normalized Mean Square Error (NMSE) parameters. The findings of this research show that by combining parallel and fan beam projection, image quality can be increased by more than 10% in terms of its PSNR value and more than 100% in terms of its NMSE value, compared to a single parallel beam.
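    The abstract leaves the metric definitions implicit, so the following minimal Python sketch shows the standard PSNR and NMSE formulas applied to a synthetic phantom. The images, noise levels, and peak value are illustrative assumptions standing in for the single-beam and mixed-beam reconstructions, not data from the paper.

```python
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray, peak: float = 1.0) -> float:
    """Peak Signal-to-Noise Ratio in dB; higher indicates a cleaner reconstruction."""
    mse = np.mean((reference - reconstructed) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def nmse(reference: np.ndarray, reconstructed: np.ndarray) -> float:
    """Normalized Mean Square Error; lower indicates a closer match to the reference."""
    return np.sum((reference - reconstructed) ** 2) / np.sum(reference ** 2)

# Hypothetical comparison: a noisier "parallel-only" vs. a less noisy "mixed-beam"
# reconstruction of the same synthetic phantom (noise levels are assumptions).
rng = np.random.default_rng(0)
phantom = rng.random((64, 64))
parallel_only = phantom + rng.normal(0.0, 0.05, phantom.shape)
mixed_beam = phantom + rng.normal(0.0, 0.02, phantom.shape)
print(f"parallel-only: PSNR {psnr(phantom, parallel_only):.1f} dB, NMSE {nmse(phantom, parallel_only):.4f}")
print(f"mixed-beam:    PSNR {psnr(phantom, mixed_beam):.1f} dB, NMSE {nmse(phantom, mixed_beam):.4f}")
```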

    Semantic data mining and linked data for a recommender system in the AEC industry

    Even though it can provide design teams with valuable performance insights and enhance decision-making, monitored building data is rarely reused in an effective feedback loop from operation to design. Data mining allows users to obtain such insights from the large datasets generated throughout the building life cycle. Furthermore, semantic web technologies make it possible to formally represent the built environment and retrieve knowledge in response to domain-specific requirements. Both approaches have independently established themselves as powerful aids in decision-making. Combining them can enrich data mining processes with domain knowledge and facilitate knowledge discovery, representation, and reuse. In this article, we review the available data mining techniques and investigate to what extent they can be fused with semantic web technologies to provide recommendations to the end user in performance-oriented design. We demonstrate an initial implementation of a linked data-based system for generating recommendations.
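    As a sketch of how a linked-data representation can feed a simple recommendation step, the snippet below queries a toy RDF graph of monitored building zones with rdflib. The vocabulary (ex:Zone, ex:annualEnergyUse), the threshold, and the recommendation rule are invented for illustration; the article's actual ontology, mined patterns, and recommender logic are not described in the abstract.

```python
from rdflib import Graph

# Toy linked-data graph: two hypothetical zones with monitored energy use
# (the ex: vocabulary is made up for this example).
TTL = """
@prefix ex: <http://example.org/building#> .
ex:zoneA a ex:Zone ; ex:annualEnergyUse 180.0 .
ex:zoneB a ex:Zone ; ex:annualEnergyUse  95.0 .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

# Domain-specific retrieval via SPARQL: zones above an assumed performance target.
QUERY = """
PREFIX ex: <http://example.org/building#>
SELECT ?zone ?use WHERE {
    ?zone a ex:Zone ; ex:annualEnergyUse ?use .
    FILTER (?use > 120.0)
}
"""

# A mined pattern (here just a hard-coded threshold) becomes a recommendation.
for zone, use in g.query(QUERY):
    print(f"{zone}: {float(use)} kWh per year -- review this zone's design")
```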

    Course-based Science Research Promotes Learning in Diverse Students at Diverse Institutions

    Course-based research experiences (CREs) are powerful strategies for promoting learning and improving persistence for all students, both science majors and non-science majors. Here we address the crucial components of CREs (context, discovery, ownership, iteration, communication, presentation) found across a broad range of such courses at a variety of academic institutions. We also address how the design of a CRE should vary according to the background of the student participants; no single CRE format is perfect. We provide a framework for implementing CREs across multiple institutional types and several disciplines throughout the typical four years of undergraduate work, designed for students from a variety of backgrounds. Our experiences implementing CREs also provide guidance on overcoming barriers to their implementation.

    The relationship between knowledge management and innovation level in Mexican SMEs: Empirical evidence

    The transformation of the current society from an industry-based economy to a knowledge management and innovation-based economy is changing the design and implementation of business strategies and the nature of competition among organizations, which are mainly small and medium-sized enterprises (SMEs). These firms struggle to survive in a market that is increasingly demanding and competitive, so they see knowledge management as one of the most effective strategies for enabling innovation activities within their businesses. For these reasons, the main goal of this research paper is to analyze the relationship between knowledge management and innovation in Mexican SMEs. The empirical analysis was carried out on a sample of 125 manufacturing SMEs (each with 20 to 250 employees). The results indicate that knowledge management has a positive impact on product, process, and management system innovation.

    Learning Immune-Defectives Graph through Group Tests

    This paper deals with an abstraction of a unified problem of drug discovery and pathogen identification. Pathogen identification involves identifying disease-causing biomolecules. Drug discovery involves finding chemical compounds, called lead compounds, that bind to pathogenic proteins and eventually inhibit the function of the protein. In this paper, the lead compounds are abstracted as inhibitors, pathogenic proteins as defectives, and the mixture of "ineffective" chemical compounds and non-pathogenic proteins as normal items. A defective can be immune to the presence of an inhibitor in a test, so a test containing a defective is positive iff it does not contain its "associated" inhibitor. The goal of this paper is to identify the defectives, the inhibitors, and their "associations" with high probability; in other words, to learn the Immune Defectives Graph (IDG) efficiently through group tests. We propose a probabilistic non-adaptive pooling design, a probabilistic two-stage adaptive pooling design, and decoding algorithms for learning the IDG. For the two-stage adaptive pooling design, we show that the sample complexity of the number of tests required to guarantee recovery of the inhibitors, defectives, and their associations with high probability, i.e., the upper bound, exceeds the proposed lower bound by a logarithmic multiplicative factor in the number of items. For the non-adaptive pooling design, too, we show that the upper bound exceeds the proposed lower bound by at most a logarithmic multiplicative factor in the number of items.
    Comment: Double column, 17 pages. Updated with tighter lower bounds and other minor edits
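    To make the test model concrete, here is a small Python simulation of the positivity rule together with a probabilistic non-adaptive pooling design (each item enters each pool independently with probability p, one plausible reading of such a design). The item counts, associations, and parameters are hypothetical, and the paper's decoding algorithms and bounds are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

n_items = 20
defectives = {3, 7}            # hypothetical pathogenic proteins
inhibitors = {3: 12, 7: 15}    # defective -> its associated inhibitor
n_tests = 30
p = 0.3                        # per-item inclusion probability in each pool

# Probabilistic non-adaptive design: a random Bernoulli(p) pooling matrix.
pools = rng.random((n_tests, n_items)) < p

def outcome(pool: np.ndarray) -> bool:
    """A pool is positive iff it contains a defective whose associated
    inhibitor is absent from the same pool."""
    return any(pool[d] and not pool[inhibitors[d]] for d in defectives)

results = np.array([outcome(pools[t]) for t in range(n_tests)])
print(f"{results.sum()} of {n_tests} pools tested positive")
# Decoding (not shown here) would infer the IDG from (pools, results).
```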