Examining the Impact of Data Layout on Tape on Data Recall Performance for ATLAS
Increases in data volumes are forcing high-energy and nuclear physics experiments to store more frequently accessed data on tape. Extracting the maximum performance from tape drives is critical to make this viable from a data availability and system cost standpoint. The nature of data ingest and retrieval in an experimental physics environment makes achieving high access performance difficult, given the inherent limitations of magnetic tape. Tailoring the layout of data on tape is one key to improving read performance. This paper highlights work in progress to characterize ATLAS data ingested into the tape system, to understand how data layout, i.e., file co-location on tape and file distribution over tapes, affects read performance, and to explore how an optimal data layout might be achieved in a production environment.
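The effect of co-location described above can be sketched with a toy model (not the paper's actual model or data; file names, tape counts, and layouts below are hypothetical): recall cost is dominated by tape mounts and seeks, so placing files that are recalled together on the same tape reduces the number of tapes that must be mounted.

```python
# Toy illustration of why file co-location on tape matters for recalls.
# All layouts and file names are hypothetical placeholders.

def tapes_touched(recall_set, layout):
    """layout maps file -> tape id; return the set of tapes that must be mounted."""
    return {layout[f] for f in recall_set}

files = [f"f{i}" for i in range(8)]
recall = {"f0", "f1", "f2", "f3"}            # a dataset typically recalled together

scattered = {f: i % 4 for i, f in enumerate(files)}   # dataset spread over 4 tapes
colocated = {f: i // 4 for i, f in enumerate(files)}  # dataset kept on 1 tape

print(len(tapes_touched(recall, scattered)))  # 4 mounts
print(len(tapes_touched(recall, colocated)))  # 1 mount
```

The same recall request costs four mounts under the scattered layout but only one when the dataset is co-located, which is the intuition behind tailoring data layout at ingest time.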
Microfabrication of Three-Dimensional Structures in Polymer and Glass by Femtosecond Pulses
We report three-dimensional laser microfabrication, which enables microstructuring of materials on the scale of 0.2-1 micrometers. The two different types of microfabrication demonstrated and discussed in this work are based on holographic recording and light-induced damage in transparent dielectric materials. Both techniques use nonlinear optical excitation of materials by ultrashort laser pulses (duration < 1 ps).
Comment: This is a proceedings paper of the bilateral Conference (Republics of China & Lithuania) on Optoelectronics and Magnetic Materials, Taipei, May 25-26, 2002.
Visualizing the Periods of Stock Prices Using Non-Harmonic Analysis of the NASDAQ Composite Index Since 1985
The prediction of stock prices is studied extensively because of the demand from private investors and financial institutions. However, long-term prediction is difficult due to the large number of factors that affect the real market. Previous research has focused on the fluctuation patterns and periodicity of stock prices, and we have likewise focused on their periodicity. We have used a new high-resolution frequency analysis method (non-harmonic analysis) that overcomes the low frequency resolution of earlier approaches. As a consequence, we have succeeded in visualizing the various periodicities of stock prices. The periodicity fluctuates gently in most periods, but we confirmed that it fluctuated violently in periods when a sudden event occurred. We expect that this experimental result, in combination with previous research, will help increase predictive accuracy and aid long-term prediction.
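The resolution advantage of a non-harmonic approach can be illustrated with a minimal sketch (an illustrative stand-in, not the authors' exact algorithm): instead of restricting candidate frequencies to the DFT bins k/N, scan a much finer frequency grid and fit a sinusoid by least squares at each candidate, keeping the one that explains the most energy.

```python
import numpy as np

# Sketch of high-resolution (non-harmonic) period estimation: fit a
# cos/sin pair at each candidate frequency on a fine grid, rather than
# only at the DFT bin frequencies k/N. Illustrative only.

def nonharmonic_peak(x, freqs):
    """Return the candidate frequency whose sinusoid fit captures the most energy."""
    n = np.arange(len(x))
    best_f, best_power = None, -np.inf
    for f in freqs:
        c = np.cos(2 * np.pi * f * n)
        s = np.sin(2 * np.pi * f * n)
        # project x onto the cos/sin pair at frequency f
        a = (x @ c) / (c @ c)
        b = (x @ s) / (s @ s)
        power = a * a + b * b
        if power > best_power:
            best_f, best_power = f, power
    return best_f

# synthetic "price" oscillation with a non-integer cycle frequency
n = np.arange(512)
true_f = 0.0137                      # cycles/sample; not a multiple of 1/512
x = np.sin(2 * np.pi * true_f * n)

# grid spacing 1e-5, far finer than the DFT spacing 1/512 ~ 2e-3
grid = np.linspace(0.005, 0.05, 4501)
print(nonharmonic_peak(x, grid))
```

A plain DFT would smear this component across neighboring bins near 0.0137, while the grid search localizes it to within the grid spacing, which is the kind of resolution gain the abstract attributes to non-harmonic analysis.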
Finalizing Transition to the New Data Center at BNL
Computational science, data management and analysis have been key factors in the success of Brookhaven National Laboratory's scientific programs at the Relativistic Heavy Ion Collider (RHIC), the National Synchrotron Light Source II (NSLS-II), and the Center for Functional Nanomaterials (CFN); in biological, atmospheric, and energy systems science, Lattice Quantum Chromodynamics (LQCD), and Materials Science; and in our participation in international research collaborations, such as the ATLAS Experiment at Europe's Large Hadron Collider (LHC) at CERN (Switzerland) and the Belle II Experiment at KEK (Japan). The construction of a new data center acknowledges the increasing demand for computing and storage services at BNL in the near term and enables the Lab to address the needs of future experiments at the High-Luminosity LHC at CERN and the Electron-Ion Collider (EIC) at BNL in the long term. The Computing Facility Revitalization (CFR) project is aimed at repurposing the former National Synchrotron Light Source (NSLS-I) building as the new data center for BNL. The construction of the new data center was finished in 2021Q3, and it was delivered for production in early FY2022 for all collaborations supported by the Scientific Data and Computing Center (SDCC), including the STAR, PHENIX and sPHENIX experiments at the RHIC collider at BNL, the Belle II Experiment at KEK (Japan), and the Computational Science Initiative (CSI) at BNL.
This paper highlights the key mechanical, electrical, and networking components of the new data center in its final configuration, as used in production since 2021Q4. It gives an overview of the extension of the central network systems into the new data center and of the migration of a significant portion of the IT load and services from the old data center carried out in 2021-2023, with the main phase of the gradual IT equipment replacement and migration from the old data center into the new one expected to complete by the end of FY2023 (Sep 30, 2023).
Financial Case Study on the Use of Cloud Resources in HEP Computing
An all-inclusive analysis of costs for on-premises and public cloud-based solutions to handle the bulk of HEP computing requirements shows that dedicated on-premises deployments of compute and storage resources are still the most cost-effective. Since the advent of public cloud services, the HEP community has engaged in multiple proofs of concept to study the technical viability of using cloud resources; however, the financial viability of using cloud resources for HEP computing and storage is of greater importance. We present the results of a study comparing the cost of providing computing resources in a public cloud with a comprehensive estimate of the cost of an on-premises solution for HEP computing. As in previous studies, the fundamental conclusion is that for the bulk of HEP computing needs, on-premises provisioning is significantly more cost-effective than public clouds.
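The shape of such a comparison can be sketched as a toy cost model (all figures below are hypothetical placeholders chosen for illustration, not the study's numbers): amortize the on-premises capital cost over the hardware lifetime, add operating costs, and compare the result per core-year against an on-demand cloud rate.

```python
# Toy total-cost-of-ownership sketch. Every input figure here is a
# hypothetical placeholder, not data from the study.

def onprem_cost_per_core_year(server_price, cores, lifetime_years,
                              power_cooling_per_year, admin_per_year):
    capex = server_price / lifetime_years      # straight-line amortization
    opex = power_cooling_per_year + admin_per_year
    return (capex + opex) / cores

def cloud_cost_per_core_year(hourly_rate_per_core, utilization=1.0):
    # on-demand rate, billed for every hour the core is kept available
    return hourly_rate_per_core * 24 * 365 * utilization

onprem = onprem_cost_per_core_year(
    server_price=12000, cores=64, lifetime_years=5,
    power_cooling_per_year=900, admin_per_year=600)
cloud = cloud_cost_per_core_year(hourly_rate_per_core=0.04)

print(f"on-prem ~ ${onprem:.0f}/core-year, cloud ~ ${cloud:.0f}/core-year")
```

With these illustrative inputs the amortized on-premises cost per core-year comes out well below the always-on cloud rate; the real study's conclusion rests on the same structure of comparison, with measured facility costs in place of these placeholders.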
AVal: an Extensible Attribute-Oriented Programming Validator for Java
Attribute-Oriented Programming (@OP) permits programmers to extend the semantics of a base program by annotating it with attributes that are related to a set of concerns. Examples of this are applications that rely on XDoclet (such as Hibernate) or, with the release of Java 5's annotations, EJB3. The set of attributes that implements a concern defines a Domain-Specific Language and, as such, imposes syntactic and semantic rules on the way that attributes are included in the program, or even on the program itself. We propose a framework for the definition and checking of these rules for @OP that uses Java 5 annotations. We define an extensible set of meta-annotations to allow the validation of @OP programs, as well as the means to extend them using a compile-time model of the program's source code. We show the usefulness of the approach by presenting two examples of its use: an @OP extension for the Fractal component model called Fraclet, and JSR 181 for web services definition.
Exploring Future Storage Options for ATLAS at the BNL/SDCC facility
The ATLAS experiment is expected to deliver an unprecedented amount of scientific data in the High-Luminosity LHC (HL-LHC) era. As the demand for disk storage capacity in ATLAS continues to rise steadily, the BNL Scientific Data and Computing Center (SDCC) faces challenges in terms of cost implications for maintaining multiple disk copies and adapting to the coming ATLAS storage requirements. To address these challenges, the SDCC Storage team has undertaken a thorough analysis of the ATLAS experiment's requirements, matching them to suitable storage options and strategies, and has explored alternatives to enhance or replace the current storage solution.
This paper presents the main challenges encountered while supporting big-data experiments such as ATLAS. We describe the experiment's specific requirements and priorities, focusing on the storage system characteristics critical to the high-luminosity run and on how the key storage components provided by the Storage team work together: the dCache disk storage system, its archival back-end, HPSS, and its OS-level backend storage. Specifically, we investigate a novel approach that integrates Lustre and XRootD: Lustre serves as the backend storage, and XRootD acts as the frontend access layer supporting various grid access protocols. We also describe the validation and commissioning tests, including a performance comparison between dCache and XRootD. Furthermore, we provide a performance and cost analysis comparing OpenZFS and Linux MD RAID, evaluate different storage software stacks, and showcase stress tests conducted to validate Third Party Copy (TPC) functionality.
Functional analysis of HOXD9 in human gliomas and glioma cancer stem cells
Background: HOX genes encode a family of homeodomain-containing transcription factors involved in the determination of cell fate and identity during embryonic development. They also behave as oncogenes in some malignancies.
Results: In this study, we found high expression of the HOXD9 gene transcript in glioma cell lines and human glioma tissues by quantitative real-time PCR. Using immunohistochemistry, we observed HOXD9 protein expression in human brain tumor tissues, including astrocytomas and glioblastomas. To investigate the role of HOXD9 in gliomas, we silenced its expression in the glioma cell line U87 using HOXD9-specific siRNA, and observed decreased cell proliferation, cell cycle arrest, and induction of apoptosis. This suggests that HOXD9 contributes to cell proliferation and/or cell survival. The HOXD9 gene was highly expressed in a side population (SP) of SK-MG-1 cells that was previously identified as an enriched cell fraction of glioma cancer stem-like cells. HOXD9 siRNA treatment of SK-MG-1 SP cells resulted in reduced cell proliferation. Finally, we cultured human glioma cancer stem cells (GCSCs) from patient specimens and found high expression of HOXD9 in GCSCs compared with normal astrocytes and neural stem/progenitor cells (NSPCs).
Conclusions: Our results suggest that HOXD9 may be a novel marker of GCSCs, a cell proliferation and/or survival factor in gliomas and glioma cancer stem-like cells, and a potential therapeutic target.