
    The Occultation of Polarised Light from Stellar Envelopes

    Get PDF
    Following the discovery of spectral variability in a number of emission-line B-type stars (so-called Be stars) at around the turn of this century, extensive theoretical and observational campaigns have been mounted in an attempt to understand the physics controlling the star and its environment. Although it is now known that variability occurs on all time scales, the mechanisms responsible for it are still not well understood. The consensus is that the stars are rotating rapidly (at up to approximately 80% of the critical rotational velocity), which appears to have a major influence upon the characteristics of the star and its environment, as inferred from the observed spectral variability. The observed high intrinsic polarisation of Be stars also implies that the rapid rotation distorts the surrounding circumstellar envelope. This suggests that, by using current geometrical model envelopes together with polarisation theory, constraints upon the distribution of scattering material and upon the geometry and mass of the scattering envelope could be inferred from polarimetric observations. In this thesis the theory of optically thin Thomson (or Rayleigh) scattering polarisation from stellar envelopes, for both single and binary star systems (Brown and McLean, 1977; Brown et al., 1978), is considered and extended to include finite light sources, in order to provide more stringent constraints upon Be star envelopes and to enable inferences to be made about the density structure underlying regular polarimetric variations in single and binary star systems. The spectral variability characterising the Be star phenomenon is reviewed in chapter 1, with particular reference to gamma Cas. Current spectroscopic geometric models are also discussed in some detail. Following this, the polarimetric theory and observations related to Be stars are discussed, including a brief section on binary diagnostics, since Be stars are frequently observed in binary systems, and a qualitative account of the observational consequences of scatterer occultation in binary systems. Continuing with polarimetric variability in binary systems, in chapter 2 the polarimetric variability of the Be/X-ray transient A0538-66 is investigated with a view to understanding the mass transfer from the primary (Be) star disc envelope to the secondary (neutron) star. This chapter is in a slightly different vein from the remainder of the thesis in that the scattering material is assumed not to be occulted (indeed, with so sparse a data set the question cannot even be asked). It is included here partly because Be stars in binary systems seem to exhibit spectral behaviour similar to that of single stars, so that it may throw some light upon possible underlying mechanisms, and partly as an example of how the density structure within a binary system can be inferred from polarimetric data, which is one of the questions addressed in this thesis. In chapters 3-5 the effects of incorporating a finite-size (spherical) light source into the optically thin, single Thomson scattering polarimetric theory are developed for various geometrical models, with particular reference to what constraints need to be imposed upon Be star models in order to produce the observed degree of intrinsic polarisation.
Also addressed is the extent to which the density structure of a system can be inferred from its polarimetric variability when there is only one important light source. This may be a density perturbation in the envelope of a single star, or a binary system in which the secondary is important as a light source (e.g. a neutron star). In chapter 6 the effects of occultation in obliquely rotating envelopes are discussed, and the error incurred in inferring the inclination and obliquity angles by Fourier analysis of the polarimetric data of such systems is assessed for the case where occultation is present in the data but not accounted for in the analysis. Finally, in chapter 7 a brief summary of the conclusions of this thesis is given, and suggestions are put forward for future work, with particular reference to the application of polarisation theory, as an independent method, to understanding the underlying structure of UV discrete absorption line components.
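    For orientation, the point-source result of Brown and McLean (1977) that the thesis extends can be quoted schematically. The form below is the commonly cited one, with all constant factors absorbed into the mean scattering optical depth, and should be read as a sketch of the underlying theory rather than as the thesis's own equations:

    ```latex
    % Optically thin, single Thomson scattering from an axisymmetric
    % envelope illuminated by a point source (Brown & McLean 1977),
    % as commonly quoted; constants absorbed into \bar{\tau}.
    %   i        -- inclination of the symmetry axis to the line of sight
    %   \bar\tau -- mean scattering optical depth of the envelope
    %   \gamma   -- density-weighted shape factor of the envelope
    p = \bar{\tau}\,(1 - 3\gamma)\,\sin^{2} i,
    \qquad
    \gamma = \frac{\int n(r,\theta)\,\cos^{2}\theta \; dV/r^{2}}
                  {\int n(r,\theta)\; dV/r^{2}}
    ```

    A spherically symmetric envelope gives γ = 1/3 and hence p = 0, which is why a high observed intrinsic polarisation points to a rotationally distorted, disc-like envelope.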

    Involvement of Industry in the National High Performance Computing and Communication Enterprise

    Get PDF
    We discuss aspects of a national computer science agenda for High Performance Computing and Communications (HPCC). We agree with the general direction and emphasis of the current program. In particular, the strong experimental component and the linkage of applications with computer science should be continued. We recommend accelerating the emphasis on national challenges, with more applications and technologies drawn from the information areas as compared to the simulation areas. We suggest modifying the grand challenge concept to complement the current teaming of particular computer science and applications researchers. We would emphasize better linking of each application group to the entire (inter)national computer science activity. We express this in terms of a virtual corporation metaphor. The same approach can be used to involve industry in HPCC, covering both the consumers of HPCC technology (application industries) and the producers: Independent Software Vendors (ISVs) and the hardware system companies. We illustrate this approach with InfoMall, an HPCC technology transfer program funded by New York State. The federal program should have greater incentives for the involvement of both ISVs and their products.

    An Application Perspective on High-Performance Computing and Communications

    Get PDF
    We review possible and probable industrial applications of HPCC, focusing on the software and hardware issues. Thirty-three separate categories are illustrated by detailed descriptions of five areas: computational chemistry; Monte Carlo methods from physics to economics; manufacturing and computational fluid dynamics; command and control, or crisis management; and multimedia services to client computers and set-top boxes. The hardware varies from tightly coupled parallel supercomputers to heterogeneous distributed systems. The software models span HPF and data parallelism, distributed information systems, and object/data flow parallelism on the Web. We find that in each case it is reasonably clear that HPCC works in principle, and postulate that this knowledge can be used in a new generation of software infrastructure based on the WebWindows approach, as discussed in an accompanying paper.
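    To make the data-parallel software model concrete, here is a minimal, hypothetical kernel (not taken from the paper): the same update applied independently to every array element, in the spirit of an HPF array statement, written with modern Java parallel streams as a stand-in for the parallel constructs of the period.

    ```java
    import java.util.stream.IntStream;

    // Minimal data-parallel kernel: z = a*x + y (daxpy-like), each
    // element computed independently -- the HPF-style model, expressed
    // here with Java parallel streams. Illustrative sketch only.
    public class DataParallelKernel {
        public static void main(String[] args) {
            int n = 1_000_000;
            double a = 3.0;
            double[] x = new double[n], y = new double[n], z = new double[n];
            for (int i = 0; i < n; i++) { x[i] = i; y[i] = 2.0 * i; }
            // Independent per-element work parallelises with no communication.
            IntStream.range(0, n).parallel()
                     .forEach(i -> z[i] = a * x[i] + y[i]);
            System.out.println("z[42] = " + z[42]);
        }
    }
    ```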

    Understanding ML driven HPC: Applications and Infrastructure

    Full text link
    We recently outlined the vision of "Learning Everywhere", which captures the possibility and impact of coupling learning methods with traditional HPC methods. A primary driver of such coupling is the promise that Machine Learning (ML) will give major performance improvements for traditional HPC simulations. Motivated by this potential, the ML around HPC class of integration is of particular significance. In a related follow-up paper, we provided an initial taxonomy for integrating learning around HPC methods. In this paper, which is part of the Learning Everywhere series, we discuss how learning methods and HPC simulations are being integrated to enhance the effective performance of computations. We identify several modes (substitution, assimilation, and control) in which learning methods integrate with HPC simulations, and provide representative applications in each mode. We discuss some open research questions, which we hope will motivate and clear the ground for MLaroundHPC benchmarks. (Invited talk in the "Visionary Track" at IEEE eScience 2019.)
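    As a hedged sketch of the substitution mode named above (a learned model standing in for an expensive simulation), the toy program below trains a least-squares fit on simulation outputs and then substitutes it for the simulation call. The class and method names, and the one-dimensional linear "model", are illustrative assumptions, not the paper's method.

    ```java
    import java.util.Random;

    // Substitution mode, sketched: collect (input, output) pairs from an
    // expensive simulation, fit a cheap surrogate, then call the surrogate
    // in place of the simulation. Hypothetical toy example.
    public class SubstitutionSketch {
        // Stand-in for an expensive HPC simulation.
        static double simulate(double x) {
            try { Thread.sleep(1); } catch (InterruptedException e) { }
            return 2.5 * x + 0.7;   // pretend this took hours to compute
        }

        public static void main(String[] args) {
            Random rng = new Random(42);
            int n = 100;
            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            // Phase 1: run the real simulation to gather training data.
            for (int t = 0; t < n; t++) {
                double x = rng.nextDouble();
                double y = simulate(x);
                sx += x; sy += y; sxx += x * x; sxy += x * y;
            }
            // Fit y ~ a*x + b by least squares (the "learning" step).
            double a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
            double b = (sy - a * sx) / n;
            // Phase 2: substitute the surrogate for the simulation call.
            double x = 0.5;
            System.out.printf("surrogate(%.2f) = %.4f vs simulate(%.2f) = %.4f%n",
                              x, a * x + b, x, simulate(x));
        }
    }
    ```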

    A Tale of Two Data-Intensive Paradigms: Applications, Abstractions, and Architectures

    Full text link
    Scientific problems that depend on processing large amounts of data require overcoming challenges in multiple areas: managing large-scale data distribution, co-placement and scheduling of data with compute resources, and storing and transferring large volumes of data. We analyze the ecosystems of the two prominent paradigms for data-intensive applications, hereafter referred to as the high-performance computing and Apache-Hadoop paradigms. We propose a basis, a common terminology, and functional factors upon which to analyze the two paradigms. We discuss the concept of "Big Data Ogres" and their facets as a means of understanding and characterizing the most common application workloads found across the two paradigms. We then discuss the salient features of the two paradigms, and compare and contrast the two approaches. Specifically, we examine common implementations and approaches of these paradigms, shed light upon the reasons for their current "architecture", and discuss some typical workloads that utilize them. In spite of the significant software distinctions, we believe there is architectural similarity. We discuss the potential integration of different implementations across the different levels and components. Our comparison progresses from a fully qualitative examination of the two paradigms to a semi-quantitative methodology. We use a simple and broadly used Ogre (K-means clustering), characterizing its performance on a range of representative platforms covering several implementations from both paradigms. Our experiments provide insight into the relative strengths of the two paradigms. We propose that the set of Ogres will serve as a benchmark to evaluate the two paradigms along different dimensions.
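    For readers unfamiliar with the Ogre used here, the following is a minimal sequential K-means (Lloyd's algorithm) on one-dimensional data. It is a self-contained sketch of the algorithm being benchmarked, not one of the paper's HPC or Hadoop implementations.

    ```java
    import java.util.Arrays;
    import java.util.Random;

    // Minimal K-means (Lloyd's algorithm) in one dimension:
    // alternate assignment (nearest centre) and update (mean of
    // assigned points) steps. Illustrative sketch only.
    public class KMeansOgre {
        public static void main(String[] args) {
            Random rng = new Random(1);
            int n = 1000, k = 3, iters = 20;
            double[] pts = new double[n];
            for (int i = 0; i < n; i++)
                pts[i] = rng.nextInt(k) * 10.0 + rng.nextGaussian();
            double[] centers = new double[k];
            for (int c = 0; c < k; c++) centers[c] = pts[c]; // naive init
            int[] assign = new int[n];
            for (int it = 0; it < iters; it++) {
                // Assignment step: nearest centre for each point.
                for (int i = 0; i < n; i++) {
                    int best = 0;
                    for (int c = 1; c < k; c++)
                        if (Math.abs(pts[i] - centers[c]) < Math.abs(pts[i] - centers[best]))
                            best = c;
                    assign[i] = best;
                }
                // Update step: each centre moves to the mean of its points.
                double[] sum = new double[k];
                int[] cnt = new int[k];
                for (int i = 0; i < n; i++) { sum[assign[i]] += pts[i]; cnt[assign[i]]++; }
                for (int c = 0; c < k; c++)
                    if (cnt[c] > 0) centers[c] = sum[c] / cnt[c];
            }
            System.out.println("centres: " + Arrays.toString(centers));
        }
    }
    ```

    Both the assignment and update steps parallelise over points, which is what makes K-means a convenient workload for comparing the two paradigms.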

    Java for parallel computing and as a general language for scientific and engineering simulation and modeling

    Get PDF
    We discuss the role of Java and Web technologies for general simulation. We classify the forms of concurrency typical of such problems, and analyze separately the role of Java in user interfaces, coarse-grain software integration, and detailed computational kernels. We conclude that Java could become a major language for computational science, as it potentially offers good performance, excellent user interfaces, and the advantages of object-oriented structure.
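    As an illustration of the "detailed computational kernel" role, here is a hypothetical matrix-vector product parallelised over rows, using modern Java parallel streams rather than the thread constructs available when the paper was written.

    ```java
    import java.util.stream.IntStream;

    // Small numeric kernel in plain Java: y = A*x, parallelised over
    // rows, since each row's dot product is independent. Illustrative
    // sketch of Java used for a computational kernel.
    public class MatVec {
        public static void main(String[] args) {
            int n = 512;
            double[][] a = new double[n][n];
            double[] x = new double[n], y = new double[n];
            for (int i = 0; i < n; i++) {
                x[i] = 1.0;
                for (int j = 0; j < n; j++) a[i][j] = 1.0 / (i + j + 1);
            }
            IntStream.range(0, n).parallel().forEach(i -> {
                double s = 0;
                for (int j = 0; j < n; j++) s += a[i][j] * x[j];
                y[i] = s;
            });
            System.out.println("y[0] = " + y[0]);
        }
    }
    ```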