544 research outputs found

    Operational Numerical Weather Prediction systems based on Linux cluster architectures

    Progress in weather forecasting and atmospheric science has always been closely linked to improvements in computing technology. More accurate weather forecasts and climate predictions require more powerful computing resources, in addition to more complex and better-performing numerical models. To meet such large computing demands, powerful workstations or massively parallel systems have been used. In the last few years, parallel architectures based on the Linux operating system have been introduced and become popular, representing true "high performance, low cost" systems. In this work the Linux cluster experience gained at the Laboratory for Meteorology and Environmental Analysis (LaMMA-CNR-IBIMET) is described, and tips and performance are analysed.
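    A common first step in such performance analyses is a simple message-passing benchmark between cluster nodes. As a hedged illustration (this is not the LaMMA code; the use of mpi4py, the message size, and the repetition count are all assumptions), a minimal ping-pong bandwidth test might look like:

        # Minimal MPI ping-pong benchmark sketch (illustrative only).
        # Run with two processes, e.g.: mpirun -np 2 python pingpong.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        n_bytes = 1 << 20                     # 1 MiB message (assumed size)
        buf = np.zeros(n_bytes, dtype=np.uint8)
        reps = 100

        comm.Barrier()
        t0 = MPI.Wtime()
        for _ in range(reps):
            if rank == 0:
                comm.Send(buf, dest=1)        # rank 0 sends, then waits for echo
                comm.Recv(buf, source=1)
            elif rank == 1:
                comm.Recv(buf, source=0)      # rank 1 echoes the message back
                comm.Send(buf, dest=0)
        elapsed = MPI.Wtime() - t0

        if rank == 0:
            # Each repetition moves the message twice across the interconnect.
            bw = 2 * reps * n_bytes / elapsed / 1e6
            print(f"round-trip bandwidth: {bw:.1f} MB/s")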

    FAST-NMR - Functional Annotation Screening Technology Using NMR Spectroscopy

    An abundance of protein structures emerging from structural genomics and the Protein Structure Initiative (PSI) are not amenable to ready functional assignment because they lack sequence and structural homology to proteins of known function. We describe a high-throughput NMR methodology (FAST-NMR) to annotate the biological function of novel proteins through structural and sequence analysis of protein-ligand interactions. This is based on a basic tenet of biochemistry: proteins with similar functions have similar active sites and exhibit similar ligand-binding interactions, despite global differences in sequence and structure. Protein-ligand interactions are determined through a tiered NMR screen using a library of compounds with known biological activity. A rapid co-structure is determined by combining the experimental identification of the ligand-binding site from NMR chemical shift perturbations with the protein-ligand docking program AutoDock. Our CPASS (Comparison of Protein Active Site Structures) software and database is then used to compare this active site with proteins of known function. The methodology is demonstrated using the unannotated protein SAV1430 from Staphylococcus aureus.
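    The binding site is located from chemical shift perturbations between free and ligand-bound spectra. As a sketch, the widely used combined 1H/15N perturbation can be computed as below; the 0.14 nitrogen weighting and the mean-plus-one-standard-deviation cutoff are common conventions, not necessarily the exact values used by FAST-NMR, and the shift data are made up:

        # Combined 1H/15N chemical shift perturbation (CSP) sketch.
        # Weighting and cutoff are common conventions, not FAST-NMR's values.
        import numpy as np

        def combined_csp(dH, dN, wN=0.14):
            """Weighted 1H/15N shift change per residue (ppm)."""
            return np.sqrt(dH**2 + (wN * dN)**2)

        # Hypothetical per-residue shift differences (free vs. ligand-bound)
        dH = np.array([0.01, 0.02, 0.15, 0.22, 0.01])   # 1H, ppm
        dN = np.array([0.05, 0.10, 0.90, 1.40, 0.08])   # 15N, ppm

        csp = combined_csp(dH, dN)
        cutoff = csp.mean() + csp.std()                 # illustrative threshold
        binding_site = np.where(csp > cutoff)[0]
        print("perturbed residues (0-based):", binding_site)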

    Online Novelty Detection System: One-Class Classification of Systemic Operation

    Presented is an Online Novelty Detection System (ONDS) that uses Gaussian Mixture Models (GMMs) and one-class classification techniques to identify novel information in multivariate time-series data. Multiple data preprocessing methods are explored, and feature vectors are formed from frequency components obtained by the Fast Fourier Transform (FFT) and Welch's method of estimating Power Spectral Density (PSD). The number of features is reduced using band-power schemes and Principal Component Analysis (PCA). The Expectation Maximization (EM) algorithm is used to learn GMM parameters from feature vectors collected only under normal operating conditions. One-class classification is achieved by thresholding likelihood values relative to statistical limits. The ONDS is applied to two applications from different domains. The first uses the ONDS to evaluate the systemic health of Radio Frequency (RF) power generators; four different models of RF power generator and over 400 unique units are tested, achieving an average robust true positive rate of 94.76% and a best specificity of 86.56%. The second uses the ONDS to identify novel events in equine motion data and assess equine distress; the ONDS correctly identifies target behaviors as novel events with 97.5% accuracy. Implementations of both are evaluated on embedded systems and demonstrate execution times appropriate for online use.
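    A minimal sketch of this processing chain (Welch PSD features, PCA reduction, an EM-fitted GMM on normal data only, and a likelihood threshold) might look as follows; the window length, component counts, and percentile-based threshold are illustrative assumptions, not the ONDS settings:

        # One-class novelty detection sketch: Welch PSD -> PCA -> GMM threshold.
        import numpy as np
        from scipy.signal import welch
        from sklearn.decomposition import PCA
        from sklearn.mixture import GaussianMixture

        def psd_features(windows, fs=1000.0):
            """One PSD feature vector per time-series window."""
            return np.array([welch(w, fs=fs, nperseg=256)[1] for w in windows])

        rng = np.random.default_rng(0)
        normal = rng.normal(size=(200, 1024))     # stand-in "normal" windows
        novel = rng.normal(size=(20, 1024)) + np.sin(np.arange(1024) * 0.3)

        X = psd_features(normal)
        pca = PCA(n_components=10).fit(X)         # reduce feature dimension
        Z = pca.transform(X)

        gmm = GaussianMixture(n_components=4, random_state=0).fit(Z)  # EM fit
        threshold = np.percentile(gmm.score_samples(Z), 1)  # statistical limit

        def is_novel(windows):
            z = pca.transform(psd_features(windows))
            return gmm.score_samples(z) < threshold  # low likelihood => novel

        print("fraction flagged as novel:", is_novel(novel).mean())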

    A proactive fault tolerance framework for high performance computing (HPC) systems in the cloud

    High Performance Computing (HPC) systems have been widely used by scientists and researchers in both industry and university laboratories to solve advanced computation problems. Most advanced computation problems are either data-intensive or computation-intensive, and may take hours, days or even weeks to complete; for example, some traditional HPC computations run on 100,000 processors for weeks. Consequently, traditional HPC systems often require huge capital investments, and scientists and researchers sometimes have to wait in long queues to access shared, expensive HPC systems. Cloud computing, on the other hand, offers new computing paradigms, capacity, and flexible solutions for both business and HPC applications. Some of the computation-intensive applications usually executed on traditional HPC systems can now be executed in the cloud, and the cloud pricing model eliminates huge upfront capital investments. However, even for cloud-based HPC systems, fault tolerance remains a growing concern. The large number of virtual machines and electronic components, as well as software complexity and overall system reliability, availability and serviceability (RAS), are factors with which HPC systems in the cloud must contend. The reactive fault tolerance approach of checkpoint/restart, which is commonly used in HPC systems, does not scale well in the cloud due to resource sharing and distributed system networks. Hence, the need for reliable, fault-tolerant HPC systems is even greater in a cloud environment. In this thesis we present a proactive fault tolerance approach for HPC systems in the cloud that reduces wall-clock execution time, as well as dollar cost, in the presence of hardware failure. We have developed a generic fault tolerance algorithm for HPC systems in the cloud, and a cost model for executing computation-intensive applications on HPC systems in the cloud. Our experimental results, obtained from a real cloud execution environment, show that the wall-clock execution time and cost of running computation-intensive applications in the cloud can be considerably reduced compared to the checkpoint and redundancy techniques used in traditional HPC systems.
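    The proactive idea, in contrast to reactive checkpoint/restart, is to act on a failure prediction before the node dies, for example by live-migrating virtual machines to a spare. A minimal sketch of such a control loop follows; the health predictor, threshold, and migrate() hook are assumptions for illustration, not the thesis's actual algorithm:

        # Proactive fault tolerance sketch: poll node health, predict failure,
        # and move work away before the node dies. All values are assumptions.
        import random
        import time

        FAILURE_THRESHOLD = 0.7   # assumed probability above which we act

        def predicted_failure_probability(node):
            # Stand-in for a real predictor fed by SMART data, ECC error
            # counts, temperature sensors, hypervisor logs, etc.
            return random.random()

        def migrate(node, spare):
            print(f"live-migrating VMs from {node} to {spare}")

        def proactive_loop(nodes, spares, poll_seconds=1.0, rounds=3):
            for _ in range(rounds):
                for node in list(nodes):
                    if predicted_failure_probability(node) > FAILURE_THRESHOLD \
                            and spares:
                        migrate(node, spares.pop())   # act before the failure
                        nodes.remove(node)
                time.sleep(poll_seconds)

        proactive_loop(nodes=["vm-01", "vm-02", "vm-03"], spares=["vm-spare"])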

    High resolution solar observations in the context of space weather prediction

    Space weather has a great impact on the Earth and human life. It is important to study and monitor active regions on the solar surface and ultimately to predict space weather based on the Sun's activity. In this study, a system that uses the full power of speckle masking imaging, with parallel processing, to obtain high-spatial-resolution images of the solar surface in near real time has been developed and built. This system greatly improves the ability to monitor the evolution of solar active regions and to predict the adverse effects of space weather. The data obtained by this system have also been used to study fine structures on the solar surface and their effects on the upper solar atmosphere. A solar active region has been studied using high resolution data obtained by speckle masking imaging: the evolution of a pore in an active region is presented, the formation of a rudimentary penumbra is studied, and the effect of the changing magnetic fields on the upper atmosphere is discussed. Coronal Mass Ejections (CMEs) have a great impact on space weather. To study the relationship between CMEs and filament disappearances, a list of 431 filament and prominence disappearance events has been compiled. Comparison of this list with satellite CME data shows that most filament disappearances have no corresponding CME event; even among limb events, only thirty percent of filament disappearances are associated with CMEs. A CME observed on March 20, 2000 has been studied in detail. This event did not show the three-part structure of typical CMEs; its kinematic and morphological properties were examined.
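    Full speckle masking (bispectrum) reconstruction is involved, but the parallel burst-processing pattern can be sketched with a simpler relative, shift-and-add alignment, distributed over worker processes. This illustrates only the parallelization pattern, not the system's actual pipeline; frame counts and sizes are made up:

        # Parallel frame-burst reconstruction sketch. Shift-and-add alignment
        # stands in for the full speckle masking reconstruction the real
        # system implements; one worker process handles one burst of frames.
        import numpy as np
        from multiprocessing import Pool

        def shift_and_add(burst):
            """Align each short-exposure frame on its brightest pixel, average."""
            ref = np.array(burst[0].shape) // 2
            out = np.zeros_like(burst[0], dtype=float)
            for frame in burst:
                peak = np.unravel_index(np.argmax(frame), frame.shape)
                out += np.roll(frame, shift=tuple(ref - np.array(peak)),
                               axis=(0, 1))
            return out / len(burst)

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            # Four bursts of 32 noisy 128x128 frames (synthetic stand-ins)
            bursts = [rng.random((32, 128, 128)) for _ in range(4)]
            with Pool(processes=4) as pool:
                images = pool.map(shift_and_add, bursts)  # one worker per burst
            print(len(images), images[0].shape)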

    Internet Predictions

    More than a dozen leading experts give their opinions on where the Internet is headed and where it will be in the next decade, in terms of technology, policy, and applications. They cover topics ranging from the Internet of Things to climate change to the digital storage of the future. A summary of the articles is available in the Web extras section.

    An adaptive grid refinement strategy for the simulation of negative streamers

    The evolution of negative streamers during electric breakdown of a non-attaching gas can be described by a two-fluid model for electrons and positive ions. It consists of continuity equations for the charged particles, including drift, diffusion and reaction in the local electric field, coupled to the Poisson equation for the electric potential. The model generates field enhancement and steep propagating ionization fronts at the tips of growing ionized filaments. An adaptive grid refinement method for the simulation of these structures is presented. It uses finite volume spatial discretizations and explicit time stepping, which allows the grids for the continuity equations to be decoupled from those for the Poisson equation. Standard refinement methods, in which the refinement criterion is based on local error monitors, fail due to the pulled character of the streamer front, which propagates into a linearly unstable state. We present a refinement method that deals with all these features. Tests on one-dimensional streamer fronts as well as on three-dimensional streamers with cylindrical symmetry (hence effectively 2D for numerical purposes) are carried out successfully. Results on fine grids are presented; they show that such an adaptive grid method is needed to capture the streamer characteristics well. This refinement strategy enables us to adequately compute negative streamers in pure gases in the parameter regime where a physical instability appears: branching streamers. (46 pages, 19 figures; to appear in J. Comput. Phys.)
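    For reference, the dimensionless "minimal streamer model" behind this description is commonly written as below, with a Townsend ionization coefficient; this is the standard form from the literature, and the paper's exact nondimensionalization may differ in detail:

        % Minimal streamer model (dimensionless): electron density \sigma,
        % positive-ion density \rho, electric potential \phi.
        \begin{align}
          \partial_t \sigma &= \nabla \cdot \left( \sigma \mathbf{E}
                               + D \nabla \sigma \right)
                               + \sigma \, |\mathbf{E}| \, e^{-1/|\mathbf{E}|}, \\
          \partial_t \rho   &= \sigma \, |\mathbf{E}| \, e^{-1/|\mathbf{E}|}, \\
          \nabla^2 \phi     &= \sigma - \rho, \qquad \mathbf{E} = -\nabla \phi .
        \end{align}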

    Jahresbericht 2013 zur kooperativen DV-Versorgung (Annual Report 2013 on Cooperative IT Services)

    Table of contents (translated from the German):
    Preface; overview of advertisers.
    Part I: on the work of the DP Commission, the Extended IT Steering Committee, the IT Steering Committee, and the Scientific Advisory Board of the ZIH.
    Part II:
    1. The Center for Information Services and High Performance Computing (ZIH): tasks; facts and figures (representative selection); budget; structure and staff; location; committee work.
    2. Communication infrastructure: usage overview of network services; network infrastructure; communication and information services.
    3. Central services and servers: service desk; trouble ticket system (OTRS); user management; login service; provision of virtual servers; storage management; license service; peripherals service; PC pools; security; Dresden Science Calendar.
    4. Services for decentralized DP systems: general; investment consulting; PC support; Microsoft Windows support; central software procurement for TU Dresden.
    5. High performance computing: high performance computer/storage complex (HRSK-II); usage overview of the HPC servers; special resources; grid resources; application software; visualization; parallel programming tools.
    6. Scientific projects and collaborations: "Competence Center for Video Conferencing Services" (VCCIV); scalable software tools supporting application optimization on HPC systems; performance and energy-efficiency analysis for innovative computer architectures; data-intensive computing, distributed computing and cloud computing; data analysis, methods and modelling in the life sciences; parallel programming, algorithms and methods; collaborations.
    7. Vocational training and internships: training as IT specialist, application development track; internships.
    8. Training and continuing education events.
    9. Events.
    10. Publications.
    Part III: contributions from the Division of Mathematics and Natural Sciences; the Division of Humanities and Social Sciences; the Division of Engineering Sciences; the Division of Civil and Environmental Engineering; the Division of Medicine; and the Central University Administration.

    Exploring distributed computing tools through data mining tasks

    Harnessing the idle CPU cycles, storage space, and other resources of networked computers for collaborative work is the main focus of all major grid computing research projects. Most university computer labs are nowadays equipped with powerful desktop PCs, and most of the time these machines lie idle, their computing power wasted or under-utilized. However, complex problems and the analysis of very large amounts of data require substantial computational resources. For such problems, one may run the analysis algorithms on very powerful and expensive computers, which reduces the number of users who can afford such data analysis tasks. Instead of using single expensive machines, distributed computing systems offer the possibility of using a set of much less expensive machines to do the same task. The BOINC and Condor projects have been used successfully for real scientific research around the world at low cost. The main goal of this work is to explore both distributed computing tools, Condor and BOINC, and to use their potential to harness idle PC resources for academic researchers to use in their research. In this thesis, data mining tasks are performed by implementing several machine learning algorithms in the distributed computing environment.
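    In a Condor- or BOINC-style setup, each data mining task is packaged as a self-contained job that reads one data chunk and writes one result, so the scheduler can farm many such jobs out to idle lab PCs. A hedged sketch of such a worker script follows; the chunk format, file names, and choice of classifier are illustrative assumptions, not the thesis's actual jobs:

        # Self-contained worker sketch for a job-farming system such as Condor
        # or BOINC: train one classifier on one data chunk, save the model.
        import pickle
        import sys

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def main(chunk_path, model_path):
            data = np.loadtxt(chunk_path, delimiter=",")  # features + label col
            X, y = data[:, :-1], data[:, -1]
            model = DecisionTreeClassifier(max_depth=8).fit(X, y)
            with open(model_path, "wb") as fh:
                pickle.dump(model, fh)                    # result shipped back
            print(f"trained on {len(y)} rows from {chunk_path}")

        if __name__ == "__main__":
            main(sys.argv[1], sys.argv[2])  # e.g. python worker.py c0.csv m0.pkl

    A Condor submit description would then queue one such process per chunk, while BOINC would wrap the same executable as a work unit.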