
    Scheduling science on television: A comparative analysis of the representations of science in 11 European countries

    While science-in-the-media is a useful vehicle for understanding the media, few scholars have used it that way: instead, they look at science-in-the-media as a way of understanding science, and often end up attributing characteristics to science-in-the-media that are simply characteristics of the media, rather than of the science they see there. This point of view was argued by Jane Gregory and Steve Miller in 1998 in Science in Public. Science, they concluded, is not a special case in the mass media; understanding science-in-the-media is mostly about understanding the media (Gregory and Miller, 1998: 105). More than a decade later, research that looks for patterns or even determinants of science-in-the-media, be it in the press or in electronic media, is still very rare. There is little interest in explaining the media's selection of science content from a media perspective. Instead, the search for, and analysis of, various kinds of distortion in media representations of science have been leading topics of science-in-the-media research since its beginnings in the USA at the end of the 1960s, and they remain influential today (see Lewenstein, 1994; Weigold, 2001; Kohring, 2005 for summaries). Only a relatively small amount of research has sought to identify factors relevant to understanding how science is treated by the mass media in general and by television in particular. The current study addresses this gap. Our research explores which constraints national media systems place on the volume and structure of science programming on television. In simpler terms, the main question this study addresses is why science-in-TV in Europe appears as it does.
We seek to link research focussing on the detailed analysis of science representations on television (Silverstone, 1984; Collins, 1987; Hornig, 1990; Leon, 2008) with media research focussing on the historical genesis and current political regulation of national media systems (see for instance Hallin and Mancini, 2004; Napoli, 2004; Open Society Institute, 2005, 2008). The former studies provide deeper insights into the selection and reconstruction of scientific subject matter, which reflects and, at the same time, reinforces popular images of science. But these studies give little attention to production constraints or other relevant factors that could explain why the media treat science as they do. The latter scholars, inter alia, shed light on the distinct media policies in Europe that significantly influence national channel patterns. However, they do not refer to clearly defined content categories but to fairly rough distinctions such as information versus entertainment or fictional versus factual. Accordingly, we know more about the historical roots and current practices of media regulation across Europe than we do about the effects of these different regimes on the provision of specific content in European societies.

    An M-QAM Signal Modulation Recognition Algorithm in AWGN Channel

    Computing distinctive features from the input data before classification contributes much of the complexity of Automatic Modulation Classification (AMC) methods, which treat modulation classification as a pattern recognition problem. Although algorithms for Multi-Level Quadrature Amplitude Modulation (M-QAM) under different channel scenarios are well documented, a search of the literature reveals that few studies address the classification of high-order M-QAM schemes such as 128-QAM, 256-QAM, 512-QAM and 1024-QAM. This work investigates the capabilities of natural logarithmic properties and the extraction of Higher-Order Cumulant (HOC) features from the raw received data. The HOC features were extracted under an Additive White Gaussian Noise (AWGN) channel, and four effective parameters were defined to distinguish modulation types from the set 4-QAM to 1024-QAM. This approach makes the recognizer more intelligent and improves the classification success rate. Simulation results obtained under statistical models of noisy channels show that the algorithm recognizes M-QAM signals reliably; the logarithmic classifier works well over both AWGN and various fading channels, and it achieves a reliable recognition rate even at low signal-to-noise ratios (below zero dB). It can therefore be considered an integrated AMC system for identifying high-order M-QAM signals: it applies a unique logarithmic classifier, offers high versatility, and outperforms previous work on automatic modulation identification systems.
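As a rough illustration of the cumulant features described above, the sketch below estimates the standard fourth-order cumulants C40 and C42 from raw complex samples of a 4-QAM signal in AWGN. All function names and parameter choices here are our own assumptions; the paper's logarithmic classifier and its four specific parameters are not reproduced.

```python
import random
from math import sqrt

def moment(x, p, q):
    """Empirical mixed moment M_pq = E[ x^(p-q) * conj(x)^q ]."""
    return sum(s ** (p - q) * s.conjugate() ** q for s in x) / len(x)

def hoc_features(x):
    """Power-normalised magnitudes of the fourth-order cumulants C40, C42."""
    m20, m21 = moment(x, 2, 0), moment(x, 2, 1)
    m40, m42 = moment(x, 4, 0), moment(x, 4, 2)
    c40 = m40 - 3 * m20 ** 2
    c42 = m42 - abs(m20) ** 2 - 2 * m21 ** 2
    p = m21.real  # average signal-plus-noise power
    return abs(c40) / p ** 2, abs(c42) / p ** 2

def qam4_with_awgn(n, noise_std, seed=1):
    """Unit-power 4-QAM symbols in additive white Gaussian noise."""
    rng = random.Random(seed)
    const = [complex(i, q) / sqrt(2) for i in (-1, 1) for q in (-1, 1)]
    return [rng.choice(const)
            + complex(rng.gauss(0, noise_std), rng.gauss(0, noise_std))
            for _ in range(n)]

f40, f42 = hoc_features(qam4_with_awgn(20000, 0.05))
```

For unit-power 4-QAM both normalised magnitudes are 1 in theory, while higher-order constellations give smaller values; since fourth-order cumulants of Gaussian noise vanish, the features stay usable under AWGN, which is what makes them attractive for AMC.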

    Science on television: how? Like that!

    This study explores the presence of science programs on the Flemish public broadcaster between 1997 and 2002 in terms of length, science domains, target groups, production mode, and type of broadcast. Our data show that, for nearly all variables, 2000 can be marked as the year in which the downward spiral for science on television was reversed. These results serve as a case study to discuss the influence of public policy and other possible motives for changes in science programming, so as to gain a clearer insight into the factors that influence whether and how science programs are broadcast on television. Three factors were found to be crucial in this respect: 1) a public service philosophy, 2) a strong governmental science policy providing structural government support, and 3) the reflection of a social discourse that articulates a need for more hard sciences.

    Sketch of Big Data Real-Time Analytics Model

    Big Data has drawn huge attention from researchers in the information sciences and from decision makers in governments and enterprises, because a great deal of potentially useful value lies hidden in huge volumes of data. Data is the new oil, but unlike oil, data can be refined further to create even more value. A new scientific paradigm has therefore been born: data-intensive scientific discovery, also known as Big Data. The growing volume of real-time data requires new techniques and technologies to uncover its value. In this paper we introduce a Big Data real-time analytics model as a new technique. We discuss and compare several Big Data technologies for real-time processing, along with various challenges and issues in adopting Big Data. Real-time Big Data analysis based on a cloud computing approach is our future research direction.

    Assessing partnership alternatives in an IT network employing analytical methods

    One of the main critical success factors for companies is their ability to build and maintain an effective collaborative network. This is especially critical in the IT industry, where developing a sustainable competitive advantage requires the integration of various resources, platforms, and capabilities provided by different actors. Employing such a collaborative network dramatically changes operations management and promotes flexibility and agility. Despite its importance, analytical tools for the collaborative network building process are lacking. In this paper, we propose an optimization model employing AHP and multiobjective programming for the collaborative network building process, based on two interorganizational relationship theories, namely (i) transaction cost theory and (ii) the resource-based view, which represent short-term and long-term considerations respectively. Five different methods were employed to solve the formulation and their performances were compared. The model was implemented in an IT company that was in the process of developing a large-scale enterprise resource planning (ERP) system. The results show that the collaborative network formed through this selection process was more efficient in terms of cost, time, and development speed. The framework offers novel theoretical underpinnings and analytical solutions and can be used as an effective tool in selecting network alternatives.
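As an illustrative sketch of the AHP step only (the criteria and pairwise judgments below are hypothetical, not the paper's data), priority weights for partner-selection criteria can be obtained by power iteration on the pairwise-comparison matrix:

```python
def ahp_weights(pairwise, iters=100):
    """Priority weights from an AHP pairwise-comparison matrix via
    power iteration toward the principal eigenvector."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]  # renormalise so the weights sum to one
    return w

# Hypothetical criteria for evaluating partners: cost, time, capability.
# Entry [i][j] states how much more important criterion i is than j.
matrix = [
    [1.0,     3.0, 0.5],
    [1 / 3.0, 1.0, 0.25],
    [2.0,     4.0, 1.0],
]
weights = ahp_weights(matrix)
```

With these example judgments, capability receives the largest weight, followed by cost and then time; in a full model such weights would feed the multiobjective program that scores network alternatives.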

    Evaluating the Relationship Between Running Times and DNA Sequence Sizes using a Generic-Based Filtering Program.

    Generic programming depends on the decomposition of programs into simpler components which may be developed separately and combined arbitrarily, subject only to well-defined interfaces. Bioinformatics deals with the application of computational techniques to data from the biological sciences. A genetic sequence is a succession of letters representing the basic structure of a hypothetical DNA molecule, with the capacity to carry information. This article studies the relationship between the running times of a generic-based filtering program and samples of genetic sequences of increasing orders of magnitude. A graphical result was obtained to depict this relationship, and the complexity of the generic tree program was found to be O(log2 N). This article provides a systematic approach for applying generic programming to bioinformatics, which could be instrumental in enabling major discoveries regarding efficient data management and analysis.
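A minimal sketch of what such a generic filtering component and timing experiment might look like, assuming an illustrative GC-content predicate and sequence sizes of our own choosing (the article's actual program and tree structure are not reproduced):

```python
import random
import timeit

def filter_seq(seq, predicate):
    """Generic filter: works on any iterable of symbols with any predicate,
    coupling the two components only through a well-defined call interface."""
    return [s for s in seq if predicate(s)]

def is_gc(base):
    """Illustrative predicate: keep guanine/cytosine bases."""
    return base in "GC"

def random_dna(n, seed=0):
    """Pseudo-random genetic sequence of length n over the DNA alphabet."""
    rng = random.Random(seed)
    return "".join(rng.choice("ACGT") for _ in range(n))

# Relate running time to sequence size across increasing orders of magnitude.
for n in (1_000, 10_000, 100_000):
    seq = random_dna(n)
    t = timeit.timeit(lambda: filter_seq(seq, is_gc), number=3)
    print(n, round(t, 4))
```

A linear scan like this grows as O(N) with sequence size; the O(log2 N) behaviour the article reports applies to its tree-based program, where each lookup descends one balanced-tree path.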

    Automating embedded analysis capabilities and managing software complexity in multiphysics simulation part I: template-based generic programming

    An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
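The paper's approach uses C++ templates and operator overloading within the Trilinos framework; as a language-neutral sketch of the same idea, the dual-number class below (names and the residual function are illustrative, not the paper's code) shows how operator overloading lets an unmodified calculation additionally propagate derivatives, analogous to swapping the templated scalar type:

```python
class Dual:
    """Forward-mode AD scalar: operator overloading lets an unchanged
    calculation carry a derivative alongside its value."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _lift(self, other):
        # Promote plain numbers to constants (zero derivative).
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        o = self._lift(other)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, other):
        o = self._lift(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def residual(u):
    # Generic calculation: works for floats and for Dual without changes.
    return u * u + 3 * u + 2

x = Dual(2.0, 1.0)   # seed the derivative d/dx = 1
r = residual(x)      # r.val = f(2) = 12.0, r.der = f'(2) = 7.0
```

In the C++ setting the same effect is achieved at compile time: templating `residual` over its scalar type means a single code path yields values, sensitivities, or other embedded quantities depending on the type supplied.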

    Python bindings for the open source electromagnetic simulator Meep

    Meep is a broadly used open source package for finite-difference time-domain electromagnetic simulations. Python bindings for Meep make it easier to use for researchers and open promising opportunities for integration with other packages in the Python ecosystem. As this project shows, implementing Python-Meep offers benefits for specific disciplines and for the wider research community.