Exploiting CAFS-ISP
In the summer of 1982, the ICLCUA CAFS Special Interest Group defined three subject areas for working party activity. These were: 1) interfaces with compilers and databases, 2) end-user language facilities and display methods, and 3) text-handling and office automation. The CAFS SIG convened one working party to address the first subject with the following terms of reference: 1) review facilities and map requirements onto them, 2) "Database or CAFS" or "Database on CAFS", 3) training needs for users to bridge to new techniques, and 4) prepare specifications to cover gaps in software. The working party interpreted the topic broadly as the data processing professional's, rather than the end-user's, view of and relationship with CAFS. This report is the result of the working party's activities. For good reasons, its content exceeds the terms of reference in their strictest sense. For example, we examine QUERYMASTER, which ICL deems an end-user tool, from both the DP and end-user perspectives. First, it is the only interface to CAFS in the current SV201. Second, the DP department needs to understand the end-user's interface to CAFS. Third, the other subjects have not yet been addressed by other active working parties
Survey of new vector computers: The CRAY 1S from Cray Research; the CYBER 205 from CDC and the parallel computer from ICL - architecture and programming
Problems which can arise with vector and parallel computers are discussed in a user-oriented context. Emphasis is placed on the algorithms used and the programming techniques adopted. Three recently developed supercomputers are examined and typical application examples are given in CRAY FORTRAN, CYBER 205 FORTRAN and DAP (distributed array processor) FORTRAN. The performance of the three systems is compared. The addition of parts of two N x N arrays is considered, and the influence of the architecture on the algorithms and programming language is demonstrated. Numerical analysis of magnetohydrodynamic differential equations by an explicit difference method is illustrated, showing very good results for all three systems. The prognosis for supercomputer development is assessed
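As a flavour of the worked example just mentioned, here is a minimal sketch in Python/NumPy standing in for the FORTRAN array extensions discussed in the paper (the array names, the value of N, and the 4 x 4 sub-block are illustrative assumptions): the scalar loop a conventional machine would execute, next to the single array statement a vector machine expresses it as.

    import numpy as np

    N = 8
    a = np.arange(N * N, dtype=float).reshape(N, N)
    b = np.ones((N, N))

    # Scalar formulation: element-by-element loop, one addition at a time.
    c_loop = np.empty((4, 4))
    for i in range(4):
        for j in range(4):
            c_loop[i, j] = a[i, j] + b[i, j]

    # Array formulation: one statement over the same 4 x 4 sub-block,
    # analogous to a single vector/array operation on the machines surveyed.
    c_vec = a[:4, :4] + b[:4, :4]

    assert np.allclose(c_loop, c_vec)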
Lattice gauge theories, dynamical fermions and parallel computation
SIGLE - Available from British Library Document Supply Centre (DSC:D71683/87), United Kingdom
ICL Technical Journal 4(4): CAFS-ISP
The special issue of the ICL Technical Journal on CAFS-ISP. This closely followed the award to ICL of the Queen's Award for Technology in April 1985. The contents include the history of the hardware and software, its status and future, perspectives from leading developers and users, and a list of related patents
Trusted execution: applications and verification
Useful security properties arise from sealing data to specific units of code. Modern processors featuring Intel's TXT and AMD's SVM achieve this by a process of measured and trusted execution. Only code which has the correct measurement can access the data, and this code runs in an environment protected from observation and interference.
We discuss the history of attempts to provide security for hardware platforms and review the literature in the field. We propose some applications which would benefit from trusted execution and discuss the functionality it enables. We present in more detail a novel variation on Diffie-Hellman key exchange which removes some reliance on random number generation.
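For reference, a toy sketch of the standard Diffie-Hellman exchange on which such a variant would build, in Python (textbook-sized, insecure parameters; the thesis's own modification is not reproduced here):

    import secrets

    # Classic textbook parameters: p prime, g a generator mod p.
    # Far too small for real use; illustration only.
    p, g = 23, 5

    # Each party draws a private exponent at random; this is precisely the
    # reliance on random number generation the thesis's variant reduces.
    a = secrets.randbelow(p - 2) + 1
    b = secrets.randbelow(p - 2) + 1

    A = pow(g, a, p)  # Alice's public value g^a mod p
    B = pow(g, b, p)  # Bob's public value g^b mod p

    # Both sides compute the same shared secret g^(ab) mod p.
    assert pow(B, a, p) == pow(A, b, p)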
We present a modelling language with primitives for trusted execution, along with its semantics. We characterise an attacker who has access to all the capabilities of the hardware. In order to achieve automatic analysis of systems using trusted execution without attempting to search a potentially infinite state space, we define transformations that reduce the number of times the attacker needs to use trusted execution to a pre-determined bound. Given reasonable assumptions, we prove the soundness of the transformation: no secrecy attacks are lost by applying it. We then describe using the StatVerif extensions to ProVerif to model the bounded invocations of trusted execution. We demonstrate the analysis of realistic systems, for which we provide case studies
Parallel algorithm for large scale electronic structure calculations
SIGLE - Available from British Library Document Supply Centre (DSC:DX84152), United Kingdom
The rhetoric of Americanisation: social construction and the British computer industry in the Post-World War II period
This research seeks to understand the process of technological development in the UK and the specific role of a 'rhetoric of Americanisation' in that process. The concept of a 'rhetoric of Americanisation' is developed throughout the thesis through a study of the computer industry in the UK in the post-war period. Specifically, the thesis discusses the threat of America: how actors in the network of innovation within the British computer industry perceived it as a threat, and the effect that this perception had on actors operating in the networks of construction in that industry. The reaction to this threat was not a simple one, however. Rather, this story is marked by sectional interests and technopolitical machination attempting to capture this rhetoric of 'threat' and 'falling behind'. The thesis explores in detail the concepts of 'threat' and 'falling behind', or more simply the 'rhetoric of Americanisation', and the effect they had on the development of the British computer industry. What form did the process of capture and modification by sectional interests within government and industry take, and what impact did this have on the British computer industry?
In answering these questions, the thesis first develops a concept of a British culture of computing, which acts as the surface of emergence for various ideologies of innovation within the social networks that made up the computer industry in the UK. In developing this understanding, the fundamental distinction between the US and UK cultures of computing is explored. This in turn allows us to develop a concept of how Americanisation emerged as a rhetorical construct. Under the influence of a 'rhetoric of Americanisation', the culture of computing in the UK began to change, and with it the process through which government and industry interacted in the development of computing technologies. The second half of the thesis develops a more nuanced and complete view of the nature of innovation in computing in the UK in the sixties, achieved through an understanding of the networks of interaction between government and industry and how these networks were reconfigured through a 'rhetoric of Americanisation'. As a result, the thesis arrives at a more complete view of change and development within the British computer industry and of how interaction with government influences that change
A study of aspects of synchronisation and communication in certain parallel computer architectures
This paper examines methods for synchronisation and communication between tasks in highly parallel arrays of processors. The development of various methods is researched and simulation techniques are applied to specific structures to examine their effectiveness. Two approaches to simulation are presented: in the first, a discrete event simulator is applied to task synchronisation implemented with semaphores in a closely coupled environment; in the second, the concurrent programming language Occam is used to simulate a systolic configuration of processors. In the latter case the design is verified through actual system construction.
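A minimal sketch of the first approach, assuming Python threads as a stand-in for close-coupled tasks (the producer/consumer roles and the item count are illustrative, not taken from the paper): two tasks synchronised with a counting semaphore.

    import threading

    items = []
    filled = threading.Semaphore(0)   # counts items available to the consumer

    def producer():
        for i in range(3):
            items.append(i)
            filled.release()          # signal: one more item is ready

    def consumer():
        for _ in range(3):
            filled.acquire()          # block until the producer has signalled
            print("consumed", items.pop(0))

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()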
Conclusions are drawn regarding the design disciplines and structure imposed by the use of these simulation techniques. A close relationship is found between the behaviour of a simulation written in Occam and the same structure constructed from multiple processors.
Further research is suggested into the subject of dataflow processors, to find suitable means for simulating such systems, prior to implementation. A type of test vehicle is proposed that would operate a dataflow processor under the control of the development system
Internal dynamics of galaxy clusters from cosmological hydrodynamical simulations
Galaxy clusters are the most massive systems in the Universe. They are usually located at the nodes of the cosmic web, from which they continuously accrete matter. In this work, by combining cosmological simulations and local Universe observations, we examined several properties of the different collisionless tracers of the internal dynamics of galaxy clusters, namely Dark Matter (DM), stars, and galaxies, to gain insights into the main processes operating in structure formation and evolution. We base our analysis on the DIANOGA zoom-in simulation set, which is composed of 29 Lagrangian regions at different levels of resolution and under varying physical conditions (full hydrodynamical and/or N-body simulations).
Recent measurements (Biviano et al. 2013, 2016; Capasso et al. 2019) of the pseudo-entropy (σ²/ρ^(2/3), where σ is the velocity dispersion and ρ the density of the collisionless tracer) allowed us to study its role in the evolution of clusters as a dynamical attractor (e.g., Taylor et al. 2001; Dehnen et al. 2005). Its fingerprint is a universal radial profile described by a simple power-law. We find good agreement in both normalisation and slope between observations and simulations. A significant tension is present for the galaxy member population; we discuss in detail the probable reasons behind this finding.
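Written out, the pseudo-entropy and the power-law profile referred to above are (the slope quoted is the commonly cited N-body value for DM, an assumption not stated in the abstract):

    S(r) = \frac{\sigma^2(r)}{\rho(r)^{2/3}}, \qquad S(r) \propto r^{\alpha}, \quad \alpha \simeq 1.25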
A large body of spectroscopic measurements (Loubser et al. 2018; Sohn et al. 2020, 2021) provides a large statistical sample for studying the dynamics of the Brightest Cluster Galaxy (BCG). We compare scaling relations between the BCG and cluster velocity dispersions and the corresponding masses: we find generally good agreement with observational results for the former and significant tension for the latter. We analyse the key features of the velocity dispersion profiles as traced by stars, DM, and galaxies (Sartoris et al. 2020), finding excellent agreement between observations and simulations. We also quantify the impact of the IntraCluster Light (ICL) on these measurements.
Furthermore, given the existing dynamical distinction between BCG and ICL, we developed a Machine Learning (ML) method based on a supervised Random Forest to classify stars in simulated galaxy clusters into these two classes. We employ matched stellar catalogues (built from a modified version of Subfind, Dolag et al. 2010) to train and test the classifier. The input features are cluster mass, normalised particle clustercentric distance, and rest-frame velocity. The model correctly identifies most of the stars, with the largest errors at the BCG outskirts, where the differences between the physical properties of the two components are less obvious. We find that our classifier gives consistent results in simulations for clusters at z<1, across different numerical resolutions and feedback implementations.
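A minimal sketch of this kind of classifier, assuming scikit-learn and synthetic stand-in data (the three feature columns follow the abstract; the value ranges, labels, and the catalogue itself are illustrative assumptions, not the thesis's training set):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 10_000

    # Synthetic stand-in for the matched stellar catalogues: one row per star,
    # with the three input features named in the abstract.
    X = np.column_stack([
        rng.uniform(1e14, 1e15, n),   # cluster mass [M_sun], assumed range
        rng.uniform(0.0, 1.0, n),     # normalised clustercentric distance
        rng.normal(0.0, 500.0, n),    # rest-frame velocity [km/s]
    ])
    # Toy labels: 0 = BCG star, 1 = ICL star; here driven mainly by distance,
    # only loosely mimicking the dynamical BCG/ICL split.
    y = (X[:, 1] + 0.1 * rng.standard_normal(n) > 0.3).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))

One convenient property of a tree ensemble here is its insensitivity to feature scaling, which suits the very different dynamic ranges of cluster mass and stellar velocity.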
The last part of the project has focused on creating a ML framework to bridge the observational analysis with predictions from simulations. Measuring the ICL in observations is a difficult task, often approached by fitting functional profiles to the BCG+ICL light profile, with significantly different results between methods. We developed a method based on convolutional neural networks to identify the ICL distribution in mock images of galaxy clusters, according to the dynamical classification we routinely perform in simulations. We construct several sets of mock images based on different observables (i.e., magnitudes, line-of-sight velocity, and velocity dispersion) that can be employed as input by the network to predict the ICL distribution in such images. This project has highlighted the dependence of the ICL build-up on the numerical resolution of the simulations, a problem which requires further investigation
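A minimal sketch of an image-to-image convolutional network of the kind described, assuming PyTorch (the architecture, the three input channels standing for the observables above, and the image size are illustrative assumptions, not the thesis's network):

    import torch
    import torch.nn as nn

    # Input: stacked mock-observable maps (e.g. magnitude, LOS velocity,
    # velocity dispersion) as channels; output: a per-pixel ICL probability map.
    class ICLNet(nn.Module):
        def __init__(self, in_channels=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(16, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(16, 1, kernel_size=1),   # per-pixel logit
            )

        def forward(self, x):
            return torch.sigmoid(self.net(x))

    model = ICLNet()
    mock = torch.randn(1, 3, 64, 64)   # one 64x64 mock image, 3 observables
    icl_map = model(mock)              # values in (0, 1) per pixel
    print(icl_map.shape)               # torch.Size([1, 1, 64, 64])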