
    Scheduling science on television: A comparative analysis of the representations of science in 11 European countries

    While science-in-the-media is a useful vehicle for understanding the media, few scholars have used it that way: instead, they look at science-in-the-media as an object of study in its own right and often end up attributing characteristics to science-in-the-media that are simply characteristics of the media, rather than of the science they see there. This point of view was argued by Jane Gregory and Steve Miller in 1998 in Science in Public. Science, they concluded, is not a special case in the mass media; understanding science-in-the-media is mostly about understanding the media (Gregory and Miller, 1998: 105). More than a decade later, research that looks for patterns or even determinants of science-in-the-media, be it in press or electronic media, is still very rare. There is little interest in explaining the media's selection of science content from a media perspective. Instead, the search for, and analysis of, several kinds of distortions in media representations of science have been leading topics of science-in-the-media research since its beginning in the USA at the end of the 1960s, and they remain influential today (see Lewenstein, 1994; Weigold, 2001; Kohring, 2005 for summaries). Only a relatively small amount of research has been conducted seeking to identify factors relevant to understanding how science is treated by the mass media in general and by television in particular. The current study addresses the lack of research in this area. Our research seeks to explore which constraints national media systems place on the volume and structure of science programming on television. In simpler terms, the main question this study is trying to address is why science-in-TV in Europe appears as it does. We seek to link research focussing on the detailed analysis of science representations on television (Silverstone, 1984; Collins, 1987; Hornig, 1990; Leon, 2008) with media research focussing on the historical genesis and current political regulation of national media systems (see for instance Hallin and Mancini, 2004; Napoli, 2004; Open Society Institute, 2005, 2008). The former studies provide deeper insights into the selection and reconstruction of scientific subject matters, which reflect and, at the same time, reinforce popular images of science. But these studies do not give much attention to production constraints or other relevant factors which could provide an insight into why media treat science as they do. The latter scholars inter alia shed light on distinct media policies in Europe which significantly influence national channel patterns. However, they do not refer to clearly defined content categories but to fairly rough distinctions such as information versus entertainment or fictional versus factual. Accordingly, we know more about the historical roots and current practices of media regulation across Europe than we do about the effects of these different regimes on the provision of specific content in European societies.

    Science on television: how? Like that!

    This study explores the presence of science programs on the Flemish public broadcaster between 1997 and 2002 in terms of length, science domains, target groups, production mode, and type of broadcast. Our data show that, for nearly all variables, 2000 can be marked as the year in which the downward spiral for science on television was reversed. These results serve as a case study for discussing the influence of public policy and other possible motives for changes in science programming, so as to gain a clearer insight into the factors that influence whether and how science programs are broadcast on television. Three factors were found to be crucial in this respect: 1) a public service philosophy, 2) a strong governmental science policy providing structural government support, and 3) the reflection of a social discourse that articulates a need for more hard sciences.

    An M-QAM Signal Modulation Recognition Algorithm in AWGN Channel

    Computing distinctive features from the input data before classification adds complexity to Automatic Modulation Classification (AMC) methods, which treat modulation classification as a pattern recognition problem. Although algorithms that focus on Multilevel Quadrature Amplitude Modulation (M-QAM) under different channel scenarios are well documented, a search of the literature reveals that few studies have addressed the classification of high-order M-QAM schemes such as 128-QAM, 256-QAM, 512-QAM, and 1024-QAM. This work investigates the capability of natural logarithmic properties and the possibility of extracting Higher-Order Cumulant (HOC) features from the raw received data. The HOC features were extracted under an Additive White Gaussian Noise (AWGN) channel, and four effective parameters were defined to distinguish the modulation types in the set 4-QAM to 1024-QAM. This approach makes the recognizer more intelligent and improves the classification success rate. Simulation results obtained under statistical models of noisy channels show that the algorithm recognizes M-QAM signals; most results were promising and showed that the logarithmic classifier works well over both AWGN and different fading channels, and that it can achieve a reliable recognition rate even at low signal-to-noise ratios (below zero dB). It can therefore be considered an integrated AMC system for identifying high-order M-QAM signals that applies a unique logarithmic classifier, offering higher versatility and superior performance compared with previous work on automatic modulation identification systems. Comment: 18 pages
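    The abstract does not spell out the feature definitions, so the following is only a minimal sketch: it assumes the conventional higher-order cumulant formulas (C40, C42) for a zero-mean complex baseband signal and uses a simple log-magnitude mapping as one plausible reading of the "logarithmic" classifier idea. The sample values and the feature choice are illustrative, not the authors'.

```cpp
// A minimal sketch, not the paper's implementation: estimating two standard
// higher-order cumulants (HOC) of a zero-mean complex baseband signal, of the
// kind commonly used as M-QAM classification features. The log-magnitude step
// is an assumption about how "logarithmic properties" might be applied.
#include <cmath>
#include <complex>
#include <iostream>
#include <vector>

using cplx = std::complex<double>;

// Sample moment M_pq = E[ x^(p-q) * conj(x)^q ].
cplx moment(const std::vector<cplx>& x, int p, int q) {
    cplx acc{0.0, 0.0};
    for (const cplx& s : x)
        acc += std::pow(s, static_cast<double>(p - q)) *
               std::pow(std::conj(s), static_cast<double>(q));
    return acc / static_cast<double>(x.size());
}

int main() {
    // Toy samples; in practice these are the raw received AWGN-channel outputs.
    std::vector<cplx> x = {{1, 1}, {-1, 1}, {1, -1}, {-1, -1}, {3, 1}, {-3, -1}};

    cplx M20 = moment(x, 2, 0), M21 = moment(x, 2, 1);
    cplx M40 = moment(x, 4, 0), M42 = moment(x, 4, 2);

    // Standard fourth-order cumulants for a zero-mean complex process.
    cplx C40 = M40 - 3.0 * M20 * M20;
    cplx C42 = M42 - std::norm(M20) - 2.0 * M21 * M21;

    // Log-magnitude features of the kind a logarithmic classifier could use.
    std::cout << "log|C40| = " << std::log(std::abs(C40)) << '\n';
    std::cout << "log|C42| = " << std::log(std::abs(C42)) << '\n';
}
```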

    Assessing partnership alternatives in an IT network employing analytical methods

    One of the main critical success factors for companies is their ability to build and maintain an effective collaborative network. This is even more critical in the IT industry, where the development of a sustainable competitive advantage requires the integration of various resources, platforms, and capabilities provided by various actors. Employing such a collaborative network dramatically changes operations management and promotes flexibility and agility. Despite its importance, there is a lack of analytical tools for the collaborative network building process. In this paper, we propose an optimization model employing AHP and multiobjective programming for the collaborative network building process, based on two theories of interorganizational relationships, namely (i) transaction cost theory and (ii) the resource-based view, which are representative of short-term and long-term considerations, respectively. Five different methods were employed to solve the formulation, and their performance was compared. The model was implemented in an IT company that was in the process of developing a large-scale enterprise resource planning (ERP) system. The results show that the collaborative network formed through this selection process was more efficient in terms of cost, time, and development speed. The framework offers novel theoretical underpinnings and analytical solutions and can be used as an effective tool in selecting network alternatives.
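    As an illustration of the AHP weighting step only (the multiobjective programming part of the model is not shown, and the criteria names and pairwise judgments below are assumptions rather than values from the study), here is a minimal sketch of deriving priority weights from a pairwise comparison matrix using the common geometric-mean approximation of the principal eigenvector.

```cpp
// Minimal AHP sketch: priority weights from a pairwise comparison matrix.
// The criteria and judgments are illustrative assumptions only.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // Pairwise comparisons on Saaty's 1-9 scale for three hypothetical
    // partner-selection criteria: cost, capability fit, relationship stability.
    std::vector<std::vector<double>> A = {
        {1.0,       3.0,       5.0},
        {1.0 / 3.0, 1.0,       2.0},
        {1.0 / 5.0, 1.0 / 2.0, 1.0},
    };
    const std::size_t n = A.size();

    // Geometric-mean approximation of the principal eigenvector.
    std::vector<double> w(n);
    double total = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        double prod = 1.0;
        for (std::size_t j = 0; j < n; ++j) prod *= A[i][j];
        w[i] = std::pow(prod, 1.0 / n);
        total += w[i];
    }
    for (double& wi : w) wi /= total;  // normalize so the weights sum to 1

    const char* names[] = {"cost", "capability fit", "relationship stability"};
    for (std::size_t i = 0; i < n; ++i)
        std::printf("%s: %.3f\n", names[i], w[i]);
}
```

    In a full AHP application, the consistency ratio of the judgment matrix would also be checked before the resulting weights feed into the multiobjective selection model.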

    Sketch of Big Data Real-Time Analytics Model

    Big Data has drawn huge attention from researchers in the information sciences and from decision makers in governments and enterprises. There is a great deal of potential and highly useful value hidden in these huge volumes of data. Data is the new oil, but unlike oil, data can be refined further to create even more value. A new scientific paradigm has therefore emerged: data-intensive scientific discovery, also known as Big Data. The growing volume of real-time data requires new techniques and technologies to discover valuable insights. In this paper we introduce a Big Data real-time analytics model as a new technique. We discuss and compare several Big Data technologies for real-time processing, along with various challenges and issues in adopting Big Data. Real-time Big Data analysis based on a cloud computing approach is our future research direction.

    SPH-EXA: Enhancing the Scalability of SPH codes Via an Exascale-Ready SPH Mini-App

    Numerical simulations of fluids in astrophysics and computational fluid dynamics (CFD) are among the most computationally demanding calculations in terms of sustained floating-point operations per second, or FLOP/s. It is expected that these numerical simulations will benefit significantly from future Exascale computing infrastructures, which will perform 10^18 FLOP/s. The performance of SPH codes is, in general, adversely impacted by several factors, such as multiple time-stepping, long-range interactions, and/or boundary conditions. In this work, an extensive study of three SPH implementations, SPHYNX, ChaNGa, and XXX, is performed to gain insights and to expose any limitations and characteristics of the codes. These codes are the starting point of an interdisciplinary co-design project, SPH-EXA, for the development of an Exascale-ready SPH mini-app. We implemented a rotating square patch as a joint test simulation for the three SPH codes and analyzed their performance on a modern HPC system, Piz Daint. The performance profiling and scalability analysis conducted on the three parent codes allowed us to expose their performance issues, such as load imbalance in both MPI and OpenMP. Two-level load balancing has been successfully applied to SPHYNX to overcome its load imbalance. The performance analysis shapes and drives the design of the SPH-EXA mini-app towards the use of efficient parallelization methods, fault-tolerance mechanisms, and load balancing approaches. Comment: arXiv admin note: substantial text overlap with arXiv:1809.0801
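    For readers unfamiliar with the two parallelism levels mentioned above, the sketch below shows, in generic form, where rank-level (MPI) and thread-level (OpenMP) imbalance arise in a particle loop. It is an illustration under stated assumptions (placeholder work function, naive uniform split of particles across ranks), not code from SPHYNX, ChaNGa, or the SPH-EXA mini-app.

```cpp
// Illustrative hybrid MPI+OpenMP particle loop: the two levels at which
// load imbalance can appear. Not taken from any of the studied codes.
#include <mpi.h>
#include <cstdio>

// Placeholder for a neighbour-dependent SPH kernel sum; per-particle cost
// varies, which is what creates intra-rank (thread-level) imbalance.
double particleWork(long long i) {
    return static_cast<double>(i % 97) * 1e-3;
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Level 1 (inter-rank, MPI): a naive uniform split of the particles;
    // a load-balanced code would instead redistribute particles based on
    // measured per-rank cost.
    const long long nGlobal = 1000000;
    const long long begin = rank * nGlobal / size;
    const long long end = (rank + 1) * nGlobal / size;

    // Level 2 (intra-rank, OpenMP): dynamic scheduling absorbs per-particle
    // cost variation among the threads of a single rank.
    double local = 0.0;
    #pragma omp parallel for schedule(dynamic, 1024) reduction(+:local)
    for (long long i = begin; i < end; ++i)
        local += particleWork(i);

    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) std::printf("total work = %f\n", global);

    MPI_Finalize();
    return 0;
}
```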

    Automating embedded analysis capabilities and managing software complexity in multiphysics simulation part I: template-based generic programming

    An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
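    The core mechanism, writing a calculation once against a generic scalar type and letting operator overloading propagate extra quantities, can be illustrated with a small self-contained sketch. This is not the Trilinos code (Trilinos packages such as Sacado implement the idea far more generally); the `Dual` type, the `residual` function, and the seed value below are hypothetical.

```cpp
// Minimal, generic illustration of template-based generic programming, not
// the Trilinos API: the same templated calculation is evaluated once with
// plain doubles and once with a tiny forward-mode "dual" scalar whose
// overloaded operators also propagate a derivative.
#include <cmath>
#include <iostream>

struct Dual {
    double val;  // function value
    double der;  // derivative w.r.t. the chosen independent variable
};

Dual operator+(Dual a, Dual b) { return {a.val + b.val, a.der + b.der}; }
Dual operator-(Dual a, Dual b) { return {a.val - b.val, a.der - b.der}; }
Dual operator*(Dual a, Dual b) { return {a.val * b.val, a.der * b.val + a.val * b.der}; }
Dual sin(Dual a) { return {std::sin(a.val), std::cos(a.val) * a.der}; }

// "Application code" written once against a generic scalar type T.
template <typename T>
T residual(T x) {
    using std::sin;          // plain overload for double; ADL finds sin(Dual)
    return x * x - sin(x);   // f(x) = x^2 - sin(x)
}

int main() {
    // Ordinary evaluation.
    std::cout << "f(0.5) = " << residual(0.5) << '\n';

    // Same code, now also producing df/dx, simply by swapping the scalar type.
    Dual x{0.5, 1.0};  // seed dx/dx = 1
    Dual r = residual(x);
    std::cout << "f(0.5) = " << r.val << ", df/dx = " << r.der << '\n';
}
```

    Swapping `double` for a richer scalar type is the kind of substitution that, at scale, lets the same templated physics code produce residuals, Jacobians, or sensitivities without being rewritten.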