
    Enhancing Client Honeypots with Grid Services and Workflows

    No full text
    Client honeypots are devices for detecting malicious servers on a network. They interact with potentially malicious servers and analyse the Web pages returned to assess whether these pages contain an attack. This type of attack is termed a 'drive-by-download'. Low-interaction client honeypots use a signature-based approach to detect known malicious code. High-interaction client honeypots run client applications in full operating systems, usually hosted by a virtual machine; the operating systems are monitored internally or externally for anomalous behaviour. In recent years a growing number of client honeypot systems have been developed, but there is little interoperability between them because each has its own custom operational scripts and data formats. By creating interoperability through standard interfaces we could more easily share the use of client honeypots and the data they collect. Another problem is providing a simple means of managing an installation of client honeypots. Workflows are a popular technology for allowing end-users to co-ordinate e-science experiments, so workflow systems can potentially be used for client honeypot management. To formulate management requirements we ran moderate-scale scans of the .nz domain over several months using a manual script-based approach. The main requirements were a system that is user-oriented, loosely coupled, and integrated with Grid computing, allowing for resource sharing across organisations. Our system design uses Grid services (extensions to Web services) to wrap client honeypots, a manager component that acts as a broker for user access, and workflows that orchestrate the Grid services. Our prototype wraps our case study, Capture-HPC, with these services, using the Taverna workflow system and a Web portal for user access.
When evaluating our experiences we found that while our system design met our requirements, a Java-based application operating on our Web services currently provides some advantages over our Taverna approach, particularly for modifying workflows, maintainability, and dealing with failure. The Taverna workflows, however, are better suited to the data analysis phase and have some usability advantages. Workflow languages such as Taverna are still relatively immature, so improvements are likely to be made. Both approaches are significantly easier to manage and deploy than the previous manual script-based method.
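The signature-based detection that the abstract attributes to low-interaction client honeypots can be sketched in a few lines: fetched page content is matched against a set of patterns for known malicious code. The signatures below are purely illustrative examples (a heap-spray fragment and a hidden iframe), not drawn from any real honeypot's signature database.

```python
import re

# Hypothetical signature set; real low-interaction honeypots maintain large,
# curated databases of patterns for known malicious code.
SIGNATURES = {
    "heap-spray": re.compile(r"unescape\((['\"])%u[0-9a-fA-F]{4}", re.IGNORECASE),
    "hidden-iframe": re.compile(r"<iframe[^>]*(width|height)\s*=\s*['\"]?0", re.IGNORECASE),
}

def scan_page(html: str) -> list:
    """Return the names of all signatures matching the fetched page."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(html)]

page = '<html><iframe src="http://evil.example" width=0 height=0></iframe></html>'
print(scan_page(page))  # ['hidden-iframe']
```

A high-interaction honeypot, by contrast, would actually render such a page in a monitored operating system and flag the anomalous behaviour it triggers, which is why it catches attacks that have no known signature.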

    Towards A Grid Infrastructure For Hydro-Meteorological Research

    Get PDF
    The Distributed Research Infrastructure for Hydro-Meteorological Study (DRIHMS) is a coordinated action co-funded by the European Commission. DRIHMS analyzes the main issues that arise when designing and setting up a pan-European Grid-based e-Infrastructure for research activities in the hydrologic and meteorological fields. The main outcome of the project is represented first by a set of Grid usage patterns to support innovative hydro-meteorological research activities, and second by the implications that such patterns define for a dedicated Grid infrastructure and the respective Grid architecture.

    Research on the Architecture and its Implementation for Instrumentation and Measurement Cloud

    Get PDF
    Cloud computing has brought a new method of resource utilization and management. Some researchers are now working on cloud-based instrumentation and measurement systems, designated Instrumentation and Measurement Clouds (IMCs). Until now, however, no standard definition or detailed architecture with an implemented system has been presented for the IMC. This paper adopts the philosophy of cloud computing and puts forward a relatively standard definition and a novel architecture for the IMC. The architecture inherits key features of cloud computing, such as on-demand service provision and scalability, for remote Instrumentation and Measurement (IM) resource utilization and management. In the architecture, instruments and sensors are virtualized into abstracted resources, and commonly used IM functions are wrapped into services; users can use these resources and services remotely, on demand. Platforms implemented under this architecture can greatly reduce the investment needed to build IM systems, enable remote sharing of IM resources, increase the utilization efficiency of various resources, and facilitate the processing and analysis of Big Data from instruments and sensors. Practical systems with a typical application were implemented on the architecture, and the results demonstrate that the novel IMC architecture provides an effective and efficient framework for establishing IM systems.
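The core idea of the abstract, physical instruments virtualized into abstracted resources that users acquire on demand, can be sketched as a small registry. All class and field names here are hypothetical illustrations, not part of the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InstrumentResource:
    """An abstracted view of a physical instrument or sensor."""
    resource_id: str
    kind: str            # e.g. "oscilloscope", "temperature-sensor"
    in_use: bool = False

class IMCloudRegistry:
    """Hands out instrument resources on demand, hiding the physical device."""
    def __init__(self):
        self._resources = []

    def register(self, resource: InstrumentResource) -> None:
        self._resources.append(resource)

    def acquire(self, kind: str) -> Optional[InstrumentResource]:
        """Return any free resource of the requested kind, or None."""
        for r in self._resources:
            if r.kind == kind and not r.in_use:
                r.in_use = True
                return r
        return None  # no free capacity: a real IMC might queue or scale out here

    def release(self, resource: InstrumentResource) -> None:
        resource.in_use = False
```

The point of the indirection is that a caller asks for a *kind* of instrument, not a specific device, which is what enables remote sharing and higher utilization of the physical resources.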

    Development of a Middleware for Connecting Instruments to Grids

    Get PDF
    Grid software is becoming an integral part of e-science, since modern science needs large computing capacity and large databases of information. For Grid software to integrate with today's science, measurement instruments must be accessible and represented through Grid middleware so that they become part of the Grid. This thesis presents an overview of Grid technology, the design of the model, and the initial implementation of the middleware called Grid Resource Instrument Model (GRIM), built using WSRF for instruments and sensors and inspired by the IEEE 1451, SensorML and TML standards. The result of this research is a middleware that can be used by Grid applications for purposes such as laboratory scheduling and sharing, remote control of instruments, and sensor monitoring.
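WSRF, which GRIM builds on, models an instrument as a stateful resource whose state is exposed as named "resource properties" that clients query through a service interface. The sketch below illustrates that resource-property idea in plain Python; the identifiers and property names are invented for the example and are not GRIM's actual interface.

```python
class StatefulInstrumentResource:
    """WSRF-style stateful resource: instrument state lives in named properties
    that a Grid service would expose for remote query and update."""

    def __init__(self, instrument_id: str):
        self.instrument_id = instrument_id
        self._properties = {"status": "idle"}  # every resource starts idle

    def set_property(self, name: str, value) -> None:
        self._properties[name] = value

    def get_property(self, name: str):
        return self._properties[name]

sensor = StatefulInstrumentResource("thermo-01")
sensor.set_property("temperature", 21.4)
print(sensor.get_property("temperature"))  # 21.4
```

In real WSRF the property document is XML and is addressed through a Web service endpoint, but the state-behind-a-service pattern is the same.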

    Fiducial Reference Measurements for Satellite Ocean Colour (FRM4SOC)

    Get PDF
    Earth observation data can help us understand and address some of the grand challenges and threats facing us today as a species and as a planet, for example climate change and its impacts and sustainable use of the Earth’s resources. However, in order to have confidence in earth observation data, measurements made at the surface of the Earth, with the intention of providing verification or validation of satellite-mounted sensor measurements, should be trustworthy and at least of the same high quality as those taken with the satellite sensors themselves. Metrology tells us that in order to be trustworthy, measurements should include an unbroken chain of SI-traceable calibrations and comparisons and full uncertainty budgets for each of the in situ sensors. Until now, this has not been the case for most satellite validation measurements. Therefore, within this context, the European Space Agency (ESA) funded a series of Fiducial Reference Measurements (FRM) projects targeting the validation of satellite data products of the atmosphere, land, and ocean, and setting the framework, standards, and protocols for future satellite validation efforts. The FRM4SOC project was structured to provide this support for evaluating and improving the state of the art in ocean colour radiometry (OCR) and satellite ocean colour validation through a series of comparisons under the auspices of the Committee on Earth Observation Satellites (CEOS). This followed the recommendations from the International Ocean Colour Coordinating Group’s white paper and supports the CEOS ocean colour virtual constellation. The main objective was to establish and maintain SI traceable ground-based FRM for satellite ocean colour and thus make a fundamental contribution to the European system for monitoring the Earth (Copernicus). 
This paper outlines the FRM4SOC project structure, objectives and methodology and highlights the main results and achievements of the project: (1) An international SI-traceable comparison of irradiance and radiance sources used for OCR calibration that set measurement, calibration and uncertainty estimation protocols and indicated good agreement between the participating calibration laboratories from around the world; (2) An international SI-traceable laboratory and outdoor comparison of radiometers used for satellite ocean colour validation that set OCR calibration and comparison protocols; (3) A major review and update to the protocols for taking irradiance and radiance field measurements for satellite ocean colour validation, with particular focus on aspects of data acquisition and processing that must be considered in the estimation of measurement uncertainty, and guidelines for good practice; (4) A technical comparison of the main radiometers used globally for satellite ocean colour validation, bringing radiometer manufacturers together around the same table for the first time to discuss instrument characterisation and its documentation, as needed for measurement uncertainty estimation; (5) Two major international side-by-side field intercomparisons of multiple ocean colour radiometers, one on the Atlantic Meridional Transect (AMT) oceanographic cruise, and the other on the Acqua Alta oceanographic tower in the Gulf of Venice; (6) Impact and promotion of FRM within the ocean colour community, including a scientific road map for the FRM-based future of satellite ocean colour validation and vicarious calibration (based on the findings of the FRM4SOC project, the consensus from two major international FRM4SOC workshops, and previous literature, including the IOCCG white paper on in situ ocean colour radiometry).
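The "full uncertainty budgets" required of each in situ sensor follow standard metrological practice (the GUM): independent standard uncertainty components combine in quadrature to give the combined standard uncertainty. A minimal illustration, with made-up component values:

```python
import math

def combined_standard_uncertainty(components):
    """Combine independent standard uncertainty components in quadrature,
    as the GUM prescribes for uncorrelated inputs with unit sensitivity
    coefficients: u_c = sqrt(u1^2 + u2^2 + ...)."""
    return math.sqrt(sum(u * u for u in components))

# Illustrative (invented) budget for a radiometric measurement, in percent:
budget = [0.5, 0.3, 0.2]  # e.g. calibration source, stray light, temperature
print(round(combined_standard_uncertainty(budget), 3))  # 0.616
```

Real radiometric budgets also carry sensitivity coefficients and correlation terms, but the quadrature sum is the backbone of every SI-traceable uncertainty statement the project calls for.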

    A REST Service-Oriented Architecture for Remote Laboratories

    Get PDF
    Dissertation for the degree of Master in Electrical and Computer Engineering. The main contribution presented in this dissertation is a REST service-oriented architecture that enables the implementation of a remote laboratory, accessed over the Internet, using a thin client. The main objective was the development of a thin client based on AJAX (Asynchronous JavaScript and XML) technologies, executed in an Internet browser, with the functionality to communicate with an application running on a server. On the server side, a virtual instrumentation application supported by LabVIEW (version 8.6) was used, which allows processes to be controlled through data acquisition boards. Both applications run on the server and exploit the potential of service-oriented architectures, specifically using Web services based on the REST architecture. The proposed thin client removes the need to install "LabVIEW plug-in" software in the client's browser. The service-oriented architecture paradigm allows each operation of the remote laboratory to be a service in its own right, which enables the creation of distributed services in control systems.
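The REST principle the dissertation describes, each remote-laboratory operation exposed as its own resource that the browser-based thin client calls, can be sketched as a small dispatch table. The routes and lab operations below are hypothetical; in the real system the server-side actions would drive LabVIEW and its data acquisition boards.

```python
# Hypothetical REST routes: each (method, path) pair names one lab operation.
ROUTES = {
    ("GET", "/lab/status"): lambda state: {"running": state["running"]},
    ("POST", "/lab/start"): lambda state: state.update(running=True) or {"running": True},
    ("POST", "/lab/stop"): lambda state: state.update(running=False) or {"running": False},
}

def handle(method: str, path: str, state: dict) -> dict:
    """Dispatch a request to the operation it names (404 if unknown)."""
    action = ROUTES.get((method, path))
    if action is None:
        return {"error": 404}
    return action(state)

state = {"running": False}
print(handle("POST", "/lab/start", state))  # {'running': True}
print(handle("GET", "/lab/status", state))  # {'running': True}
```

Because every operation is just a URL and an HTTP verb returning plain data, the AJAX client needs nothing beyond the browser's own HTTP support, which is exactly what removes the need for a browser plug-in.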

    The Future of Information Sciences : INFuture2007 : Digital Information and Heritage

    Get PDF