
    Mass Supply to Galactic Center due to Nested Bars in the Galaxy

    We investigate the rapid mass-supply process driven by nested bars in the Galaxy using numerical simulations. We simulate gas flow in the whole galactic disk containing nested bars (an outer bar and an inner bar), with especially high spatial resolution in the galactic central region. We consider two cases for the size of the inner bar: one smaller and one larger than the radius of the 200 pc gas ring, which corresponds to the Central Molecular Zone. Our numerical results show that in the large-bar cases, inner bars with large elongation induce mass inflow strong enough to destroy the 200 pc gas ring. In the small-bar cases, on the other hand, inner bars with large elongation induce large mass inflow without destroying the 200 pc gas ring. This inflow is caused by straight shocks excited by the inner bar, and in this case nuclear gas disks of ~15 pc radius are formed. The nuclear gas disks are self-gravitationally unstable, and we expect compact star clusters to form in them under the strong tidal force. We discuss the evolution of the nuclear gas disk.
    Comment: 36 pages, 11 figures. Submitted to Ap
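
    As a rough illustration of the kind of model the abstract describes, the sketch below evaluates a simple m = 2 (quadrupole-like) perturbation for two nested bars rotating at independent pattern speeds. The functional form, bar sizes, amplitudes, and pattern speeds are all placeholder assumptions for illustration, not the potential actually used in the paper.

        # Illustrative sketch (not the paper's model): two nested bars, each a
        # simple m=2 perturbation rotating at its own pattern speed.
        import numpy as np

        def bar_potential(x, y, t, amplitude, size, pattern_speed):
            """Schematic m=2 bar perturbation in the disk plane."""
            r = np.hypot(x, y)
            phi = np.arctan2(y, x)
            phi_bar = phi - pattern_speed * t  # angle in the bar's rotating frame
            # Radial envelope that peaks near the bar end and decays outside it.
            envelope = (r / size) ** 2 / (1.0 + (r / size) ** 2) ** 2
            return -amplitude * envelope * np.cos(2.0 * phi_bar)

        def nested_bar_potential(x, y, t):
            # Outer bar: large and slow; inner bar: small and fast (the "small
            # bar" case, shorter than the 200 pc ring). All values assumed.
            outer = bar_potential(x, y, t, amplitude=1.0, size=3000.0, pattern_speed=0.05)
            inner = bar_potential(x, y, t, amplitude=0.3, size=100.0, pattern_speed=0.5)
            return outer + inner

        # Example: perturbation felt by gas on the 200 pc ring (units arbitrary).
        print(nested_bar_potential(200.0, 0.0, 10.0))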

    Evolution of a Nuclear Gas Disk and Gas Supply to the Galactic Center

    Gas supply to galactic centers is important for nuclear activity and for the growth of supermassive black holes (SMBHs). Elucidating the relation between gas supply processes and activity in galactic centers is also important for understanding the evolution of galaxies. We have studied the relation between gas supply from the Galactic disk to the central 60 pc region of our Galaxy and the stellar and gas distribution in this region. Part of our study [1] was published with the support of NAOJ. Here, we report the results of [1]. In the central 60 pc, there are three young massive star clusters (the Central cluster, the Arches cluster, and the Quintuplet cluster). Each cluster contains about a hundred massive stars, and the mass of each cluster is estimated to be ~10^4 M⊙ if we assume a Salpeter-type IMF [2]. Th
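
    The quoted ~10^4 M⊙ can be reproduced with a back-of-the-envelope Salpeter integration. The mass limits, the m > 8 M⊙ threshold for "massive", and the normalization to a hundred massive stars below are our own assumptions chosen to illustrate the scaling, not the paper's exact numbers.

        # Rough check of the ~1e4 Msun cluster mass quoted above. Assumed:
        # Salpeter slope 2.35, stellar masses 0.1-100 Msun, "massive" means
        # m > 8 Msun, and each cluster hosts ~100 such stars.
        from scipy.integrate import quad

        ALPHA = 2.35                              # dN/dm ∝ m**-ALPHA
        M_LO, M_HI, M_MASSIVE = 0.1, 100.0, 8.0   # Msun (assumed limits)
        N_MASSIVE = 100                           # massive stars per cluster (assumed)

        # Normalize the IMF so it yields N_MASSIVE stars above M_MASSIVE.
        n_unit, _ = quad(lambda m: m ** -ALPHA, M_MASSIVE, M_HI)
        A = N_MASSIVE / n_unit

        # Total stellar mass: integral of m * dN/dm over the full mass range.
        m_unit, _ = quad(lambda m: m * m ** -ALPHA, M_LO, M_HI)
        print(f"total cluster mass ~ {A * m_unit:.0f} Msun")  # ~1.3e4 Msun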

    Accelerated FDPS: Algorithms to use accelerators with FDPS

    We describe algorithms implemented in FDPS (Framework for Developing Particle Simulators) to make efficient use of accelerator hardware such as GPGPUs (general-purpose computing on graphics processing units). We have developed FDPS to make it possible for researchers to develop their own high-performance parallel particle-based simulation programs without spending large amounts of time on parallelization and performance tuning. FDPS provides a high-performance implementation of parallel algorithms for particle-based simulations in a "generic" form, so that researchers can define their own particle data structures and interparticle interaction functions. FDPS compiled with user-supplied data types and interaction functions provides all the necessary functions for parallelization, so researchers can write their programs as though they were writing simple non-parallel code. It has previously been possible to use accelerators with FDPS by writing an interaction function that uses the accelerator. However, efficiency was limited by the latency and bandwidth of communication between the CPU and the accelerator, and also by the mismatch between the available degree of parallelism of the interaction function and that of the hardware. We have modified the interface of the user-provided interaction functions so that accelerators are used more efficiently, and we have implemented new techniques that reduce the amount of work on the CPU side and the amount of communication between the CPU and the accelerator. We have measured the performance of N-body simulations on a system with an NVIDIA Volta GPGPU using FDPS, and the achieved performance is around 27% of the theoretical peak. We have constructed a detailed performance model and found that the current implementation can achieve good performance on systems with much smaller memory and communication bandwidth. Our implementation should thus be applicable to future generations of accelerator systems.
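
    The dispatch pattern the abstract alludes to can be sketched as follows: instead of one accelerator call per tree-walk interaction list, many lists are uploaded, launched, and downloaded together, amortizing CPU-accelerator latency and filling the device's parallelism. This is a schematic in NumPy under our own assumptions, not the actual FDPS C++ interface; the function names are hypothetical.

        # Schematic of batched accelerator dispatch (NOT the FDPS API).
        import numpy as np

        def gravity_kernel(xi, xj, mj, eps2=1e-4):
            """Softened pairwise gravity on one batch (stand-in for a GPU kernel)."""
            dx = xj[None, :, :] - xi[:, None, :]                  # (Ni, Nj, 3)
            r2 = (dx * dx).sum(axis=-1) + eps2
            return (mj[None, :, None] * dx / r2[..., None] ** 1.5).sum(axis=1)

        def per_walk_dispatch(walks):
            # Old pattern: one host<->device round trip per interaction list;
            # each call pays transfer latency and may underfill the device.
            return [gravity_kernel(xi, xj, mj) for (xi, xj, mj) in walks]

        def multiwalk_dispatch(walks):
            # New pattern, schematically: ship all walks in one transfer, let
            # the device iterate over them, and copy all results back at once.
            packed = list(walks)                                  # one upload
            return [gravity_kernel(*w) for w in packed]           # one launch, one download

        # Usage: two interaction lists produced by (hypothetical) tree walks.
        rng = np.random.default_rng(0)
        walks = [(rng.random((4, 3)), rng.random((16, 3)), rng.random(16)),
                 (rng.random((8, 3)), rng.random((32, 3)), rng.random(32))]
        print([a.shape for a in multiwalk_dispatch(walks)])       # [(4, 3), (8, 3)]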