The On-Site Analysis of the Cherenkov Telescope Array
The Cherenkov Telescope Array (CTA) observatory will be one of the largest
ground-based very high-energy gamma-ray observatories. The On-Site Analysis
will be the first CTA scientific analysis of data acquired from the array of
telescopes, in both northern and southern sites. The On-Site Analysis will have
two pipelines: the Level-A pipeline (also known as Real-Time Analysis, RTA) and
the Level-B pipeline. The RTA performs data quality monitoring and must be able to
issue automated alerts on variable and transient astrophysical sources within
30 seconds from the last acquired Cherenkov event that contributes to the
alert, with a sensitivity not worse than the one achieved by the final pipeline
by more than a factor of 3. The Level-B analysis has a better sensitivity (not worse than the final one by more than a factor of 2), and its results should be available within 10 hours of data acquisition: for this reason the analysis can be performed at the end of an observation or on the following morning.
The latency (in particular for the RTA) and the sensitivity requirements are
challenging because of the large data rate, a few GByte/s. The remote connection to the CTA candidate site has rather limited network bandwidth, which makes the size of the exported data extremely critical and prevents any real-time processing of the data outside the telescope site.
For these reasons the analysis will be performed on-site with infrastructures
co-located with the telescopes, with limited electrical power availability and
with a reduced possibility of human intervention. This means, for example, that the on-site hardware infrastructure should have low power consumption. A substantial effort towards the optimization of the high-throughput computing service is envisioned, to provide hardware and software solutions with high throughput and low power consumption at low cost.
Comment: In Proceedings of the 34th International Cosmic Ray Conference (ICRC2015), The Hague, The Netherlands. All CTA contributions at arXiv:1508.0589
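To make the latency and bandwidth constraints concrete, the back-of-envelope calculation below estimates how much data a rate of a few GByte/s produces per observing night and how long exporting it over a limited remote link would take. The specific rate, observing time and link bandwidth are assumptions chosen for illustration, not figures from the paper.

```python
# Illustrative only: the abstract quotes "a few GByte/s"; the assumed values
# below (rate, nightly observing time, uplink bandwidth) are not from the paper.

ACQUISITION_RATE_GBPS = 3.0        # assumed raw data rate, GByte/s
OBSERVATION_HOURS_PER_NIGHT = 8    # assumed nightly observing time
SITE_UPLINK_GBPS = 0.125           # assumed ~1 Gbit/s remote link, in GByte/s

nightly_volume_gb = ACQUISITION_RATE_GBPS * OBSERVATION_HOURS_PER_NIGHT * 3600
export_time_hours = nightly_volume_gb / SITE_UPLINK_GBPS / 3600

print(f"Data acquired per night: ~{nightly_volume_gb / 1024:.0f} TByte")
print(f"Time to export over the remote link: ~{export_time_hours:.0f} hours")
# With these assumptions the export alone would take several days, which is why
# the Level-A (RTA, <30 s) and Level-B (<10 h) analyses must run on-site.
```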
TechNews digests: Jan - Nov 2009
TechNews is a technology news and analysis service aimed at anyone in the education sector keen to stay informed about technology developments, trends and issues. TechNews focuses on emerging technologies and other technology news. The TechNews service published digests from September 2004 until May 2010, combining analysis pieces and news items, issued every two to three months.
Composable architecture for rack scale big data computing
The rapid growth of cloud computing, both in the spectrum and the volume of cloud workloads, necessitates re-visiting the traditional datacenter design based on rack-mountable servers. Next-generation datacenters need to offer enhanced support for: (i) fast-changing system configuration requirements due to workload constraints, (ii) timely adoption of emerging hardware technologies, and (iii) maximal sharing of systems and subsystems in order to lower costs. Disaggregated datacenters, constructed as a collection of individual resources such as CPU, memory and disks, and composed into workload execution units on demand, are an interesting new trend that can address the above challenges. In this paper, we demonstrate the feasibility of composable systems by building a rack-scale composable system prototype using a PCIe switch. Through empirical approaches, we assess the opportunities and challenges of leveraging the composable architecture for rack-scale cloud datacenters, with a focus on big data and NoSQL workloads. In particular, we compare and contrast the programming models that can be used to access the composable resources, and derive the implications for network and resource provisioning and management in a rack-scale architecture.
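As a rough illustration of the "compose workload execution units on demand" idea described above, the sketch below models a rack as independent pools of CPU, memory and disk that can be carved into units and returned when a workload finishes. The class names and resource figures are hypothetical and are not taken from the prototype described in the paper.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    cpus: int
    memory_gb: int
    disks: int

class RackPool:
    """Toy model of a rack-scale pool of disaggregated resources.

    CPU, memory and disk are tracked independently, mirroring the idea that
    they are attached over a switch fabric rather than bound to a fixed
    server mainboard.
    """

    def __init__(self, cpus, memory_gb, disks):
        self.free = Unit(cpus, memory_gb, disks)

    def compose(self, cpus, memory_gb, disks):
        """Carve a workload execution unit out of the pool on demand."""
        if (cpus > self.free.cpus or memory_gb > self.free.memory_gb
                or disks > self.free.disks):
            raise RuntimeError("insufficient free resources in the rack")
        self.free = Unit(self.free.cpus - cpus,
                         self.free.memory_gb - memory_gb,
                         self.free.disks - disks)
        return Unit(cpus, memory_gb, disks)

    def release(self, unit):
        """Return a unit's resources to the pool when the workload ends."""
        self.free = Unit(self.free.cpus + unit.cpus,
                         self.free.memory_gb + unit.memory_gb,
                         self.free.disks + unit.disks)

# Example: reshape the same rack for a NoSQL node, then for an analytics job.
pool = RackPool(cpus=96, memory_gb=1024, disks=24)
nosql_node = pool.compose(cpus=8, memory_gb=256, disks=12)
pool.release(nosql_node)
analytics_node = pool.compose(cpus=48, memory_gb=512, disks=4)
```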
On interconnecting and orchestrating components in disaggregated data centers: the dReDBox project vision
Computing system servers, whether low- or high-end, have traditionally been designed and built using a mainboard and its hardware components as a 'hard' monolithic building block; this forms the base unit upon which the system hardware and software stack are built. This hard deployment and management boundary around compute, memory, network and storage resources is either fixed or quite limited in expandability at design time, and in practice remains so throughout the machine's lifetime, as subsystem upgrades are seldom employed. The impact of this rigidity has well-known ramifications in terms of lower system resource utilization, costly upgrade cycles and degraded energy proportionality. In the dReDBox project we take on the challenge of breaking the server boundaries through the materialization of the concept of disaggregation. The basic idea of the dReDBox architecture is to use a core of high-speed, low-latency opto-electronic fabric that brings physically distant components closer together in terms of latency and bandwidth. We envision a powerful software-defined control plane that matches the flexibility of the system to the resource needs of the applications (or VMs) running on it. Together, the hardware, interconnect and software architectures will enable the creation of a modular, vertically-integrated system that forms a datacenter-in-a-box.
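The sketch below illustrates, in a highly simplified form, what a software-defined control plane matching pooled resources to VM requests might look like: a request is satisfied by picking compute and memory "bricks" with spare capacity and recording the fabric connection between them. All names, structures and numbers are hypothetical and are not taken from the dReDBox design.

```python
# Minimal, illustrative control-plane sketch: hypothetical brick inventory,
# not the dReDBox API or data model.

compute_bricks = {"cpu0": {"free_cores": 16}, "cpu1": {"free_cores": 16}}
memory_bricks = {"mem0": {"free_gb": 512}, "mem1": {"free_gb": 512}}
fabric_links = []   # (vm, compute_brick, memory_brick) tuples set up on the fabric

def place_vm(name, cores, mem_gb):
    """Pick the first bricks with enough capacity and 'wire' them together."""
    cpu = next(b for b, s in compute_bricks.items() if s["free_cores"] >= cores)
    mem = next(b for b, s in memory_bricks.items() if s["free_gb"] >= mem_gb)
    compute_bricks[cpu]["free_cores"] -= cores
    memory_bricks[mem]["free_gb"] -= mem_gb
    fabric_links.append((name, cpu, mem))
    return cpu, mem

print(place_vm("vm-a", cores=8, mem_gb=256))   # e.g. ('cpu0', 'mem0')
print(place_vm("vm-b", cores=12, mem_gb=384))  # e.g. ('cpu1', 'mem1')
```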