
    Cloud Security: A Review of Recent Threats and Solution Models

    The most significant barrier to the wide adoption of cloud services has been attributed to perceived cloud insecurity (Smitha, Anna and Dan, 2012). In reviewing this subject, this paper explores some of the major security threats to the cloud and the security models employed in tackling them. Access control violations, message integrity violations, data leakage, the inability to guarantee complete data deletion, code injection, malware and a lack of expertise in cloud technology rank among the major threats. The European Union invested €3m in City University London to research the certification of cloud security services. This and more recent developments are significant in addressing increasing public concerns regarding the confidentiality, integrity and privacy of data held in cloud environments. Current cloud security models adopted in addressing these threats include the encryption of all data at rest and in transit. The Cisco IronPort S-Series web security appliance was among the security solutions for cloud access control issues. Two-factor authentication with RSA SecurID, combined with close monitoring, appeared to be the most popular solution to authentication and access control issues in the cloud. Database Activity Monitoring, File Activity Monitoring, URL filters and Data Loss Prevention were solutions for detecting and preventing unauthorised data migration into and within clouds. There is as yet no guarantee of complete deletion of data by cloud providers on client request; however, FADE may be a solution (Yang et al., 2012).
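
    As a minimal illustration of the "encrypt all data at rest and in transit" model mentioned above, the sketch below encrypts data client-side before upload, so the provider only ever stores ciphertext. The paper does not prescribe a toolkit; the use of Python's cryptography package and all names here are assumptions.

        # Client-side encryption before cloud upload: an illustrative sketch,
        # not any provider's actual implementation.
        from cryptography.fernet import Fernet

        key = Fernet.generate_key()              # keep this secret, off the cloud
        cipher = Fernet(key)

        plaintext = b"customer record ..."
        ciphertext = cipher.encrypt(plaintext)   # what the provider stores

        # After downloading the ciphertext back from the cloud:
        assert cipher.decrypt(ciphertext) == plaintext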

    Performance testing of distributed computational resources in the software development phase

    Grid software harmonization is possible through the adoption of standards, i.e., common protocols and interfaces. In the development phase of a standard's implementation, performance testing of grid subsystems can detect hidden software issues that are not detectable by other testing procedures. A simple software solution is proposed, consisting of a communication layer, resource consumption agents hosted on the computational resources (clients or servers), a database of performance results and a web interface to visualize the results. Communication between the agents monitoring the resources and the main control Python script (the supervisor) takes place through the communication layer, which is based on the secure XML-RPC protocol. The resource-monitoring agent is the key element of performance testing; it provides information about all monitored processes, including their child processes. The agent is a simple Python script based on the Python psutil library. The second agent, run after the resource-monitoring phase, records data from the resources in a central MySQL database. The results can be queried and visualized using a web interface. The database and data visualization scripts could be provided as a service, so that testers do not need to install them to run their own tests.
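
    The abstract names the ingredients (a psutil-based agent, XML-RPC transport, a supervisor) without showing code, so the following is a hedged sketch of how such a resource-monitoring agent might look. The function name, port and report format are assumptions, and plain XML-RPC stands in for the paper's secured variant.

        # Sketch of a resource-monitoring agent: psutil samples a process
        # tree and the numbers are served to the supervisor over XML-RPC.
        import psutil
        from xmlrpc.server import SimpleXMLRPCServer

        def sample(pid):
            """Return aggregate CPU and memory usage for a process and its children."""
            proc = psutil.Process(pid)
            procs = [proc] + proc.children(recursive=True)
            return {
                "pids": [p.pid for p in procs],
                "cpu_percent": sum(p.cpu_percent(interval=0.1) for p in procs),
                "rss_bytes": sum(p.memory_info().rss for p in procs),
            }

        server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
        server.register_function(sample)
        server.serve_forever()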

    Automatic real-time interpolation of radiation hazards: prototype and system architecture considerations

    Detecting and monitoring the development of radioactive releases in the atmosphere is important, and in many European countries monitoring networks have been established to perform this task. In the Netherlands, the National Radioactivity Monitoring network (NRM) was installed. Currently, point maps are used to interpret the data from the NRM. Automatically generating maps in real time would improve the interpretation of the data by giving the user a clear overview of the present radiological situation and providing an estimate of the radioactivity level at unmeasured locations. In this paper we present a prototype system that automatically generates real-time maps of radioactivity levels and presents the results in an interoperable way through a Web Map Service. The system is a first step towards an emergency management system and is suited primarily to data without large outliers. The automatic interpolation is done using universal kriging in combination with an automatic variogram fitting procedure. The focus is on mathematical and operational issues, and on architectural considerations for improving the interoperability and portability of the prototype system.
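
    The prototype's code is not shown in the abstract; the sketch below illustrates the interpolation step it describes, universal kriging with automatically fitted variogram parameters, using the PyKrige library as a stand-in and made-up station data.

        import numpy as np
        from pykrige.uk import UniversalKriging

        # x, y: monitoring station coordinates; z: measured dose rates (invented values)
        x = np.array([5.1, 5.9, 6.3, 4.8])
        y = np.array([52.0, 52.4, 51.8, 52.9])
        z = np.array([80.0, 95.0, 120.0, 78.0])

        # Variogram parameters are fitted automatically from the data,
        # mirroring the paper's automatic variogram fitting procedure.
        uk = UniversalKriging(x, y, z, variogram_model="spherical",
                              drift_terms=["regional_linear"])

        gridx = np.linspace(4.5, 6.5, 50)
        gridy = np.linspace(51.5, 53.0, 50)
        estimates, variances = uk.execute("grid", gridx, gridy)
        # estimates is the raster a Web Map Service could serve;
        # variances quantifies uncertainty at unmeasured locations.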

    Collection and dissemination of data from environmental monitoring systems in estuaries

    Environmental monitoring stations providing high-frequency data over a multiyear time frame are not common in estuaries. These systems are designed to record extended time series at high frequency that are of great value to decision makers and the scientific community. However, the continuous acquisition of good-quality data in estuaries is generally challenged by harsh environmental conditions. This contribution describes the main issues encountered in the continuous acquisition of valid data (water quality and currents) in 2008-2014 with a monitoring station deployed in the Guadiana Estuary, and how both near real-time and post-processed data were disseminated using web interfaces.
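
    As a toy illustration of the near real-time dissemination described above, a web interface might expose the most recent record as JSON; the endpoint, field names and use of Flask are all assumptions, not the station's actual software.

        from flask import Flask, jsonify

        app = Flask(__name__)

        # In the real station this record would come from the data logger;
        # here it is a hard-coded placeholder.
        latest = {"timestamp": "2014-06-01T12:00:00Z",
                  "salinity_psu": 32.1,
                  "turbidity_ntu": 14.0,
                  "current_speed_m_s": 0.45}

        @app.route("/latest")
        def latest_measurements():
            """Serve the most recent near real-time (not yet post-processed) record."""
            return jsonify(latest)

        if __name__ == "__main__":
            app.run()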

    Web2Touch 2019: Semantic Technologies for Smart Information Sharing and Web Collaboration

    This foreword introduces a summary of the themes and papers of the Web2Touch (W2T) 2019 Track at the 28th IEEE WETICE Conference, held in Capri in June 2019. W2T 2019 includes ten full papers and one short paper. They all address relevant issues in the field of information sharing for collaboration, including big data analytics, knowledge engineering, linked open data, applications of smart Web technologies, and smart care. The papers are a portfolio of hot issues in research and applications of semantics and smart technologies (e.g., IoT, sensors, devices for tele-monitoring, and smart content management), touching on crucial topics such as big data analysis, knowledge representation, and smart enterprise management, among others. The track shows how cooperative technologies based on knowledge representation, intelligent tools, and enhanced Web engineering can improve collaborative work through smart service design and delivery, thereby contributing to a radical change in the role of the semantic Web and its applications.

    Privacy issues of ISPs in the modern web

    In recent years, privacy issues in the networking field have become more important. In particular, there is a lively debate about how Internet Service Providers (ISPs) should collect and treat data coming from passive network measurements. This kind of information, such as flow records or HTTP logs, carries considerable knowledge from several points of view: traffic engineering, academic research, and web marketing can all take advantage of passive network measurements on ISP customers. Nevertheless, in many cases the collected measurements contain personal and confidential information about the customers exposed to monitoring, raising several ethical issues. The modern web is very different from the one we experienced a few years ago: web services have converged on a few protocols (i.e., HTTP and HTTPS) and a large share of traffic is encrypted. The aim of this work is to provide insight into which information is still visible to ISPs, with particular attention to novel and emerging protocols, and into the extent to which it carries personal information. We show that sensitive information, such as website history, is still exposed to passive monitoring. We illustrate the privacy and ethical issues deriving from the current situation and provide general guidelines and best practices for coping with the collection of network traffic measurements.
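
    One concrete channel through which website history stays visible, even when the pages themselves travel over HTTPS, is plaintext DNS. The sketch below is an illustration of this general point rather than the paper's measurement code: it passively logs the hostnames customers resolve, using scapy, and needs root privileges to run.

        from scapy.all import sniff, DNSQR

        def log_query(pkt):
            if pkt.haslayer(DNSQR):
                # qname is the hostname the customer is about to visit
                print(pkt[DNSQR].qname.decode())

        # Observe DNS traffic passively, as an on-path ISP vantage point could
        sniff(filter="udp port 53", prn=log_query, store=False)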

    Running a distributed virtual observatory: US Virtual Astronomical Observatory operations

    Operation of the US Virtual Astronomical Observatory shares some issues with modern physical observatories, e.g., intimidating data volumes and rapid technological change, and must also address unique concerns such as the lack of direct control over the underlying and scattered data resources, and the distributed nature of the observatory itself. In this paper we discuss how the VAO has addressed these challenges to provide the astronomical community with a coherent set of science-enabling tools and services. The distributed nature of our virtual observatory, with data and personnel spanning geographic, institutional and regime boundaries, is simultaneously a major operational headache and the primary science motivation for the VAO: most astronomy today uses data from many resources, and facilitating the matching of heterogeneous datasets is a fundamental reason for the virtual observatory. Key aspects of our approach include continuous monitoring and validation of VAO and VO services and the datasets provided by the community, monitoring of user requests to optimize access, caching for large datasets, and providing distributed storage services that allow users to collect results near large data repositories. Some elements are now fully implemented, while others are planned for subsequent years. The distributed nature of the VAO requires careful attention to what can be a straightforward operation at a conventional observatory, e.g., the organization of the web site or the collection and combined analysis of logs. Many of these strategies use and extend protocols developed by the international virtual observatory community.
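
    The continuous monitoring and validation of services mentioned above can be pictured as a simple availability poller. The sketch below is an assumption-laden illustration: the endpoint URLs and polling interval are invented, and the VAO's actual validation suite is far richer.

        import time
        import requests

        SERVICES = [
            "https://example.org/vo/cone-search",
            "https://example.org/vo/image-access",
        ]

        def is_up(url, timeout=10):
            """Report whether a service endpoint answers with HTTP 200."""
            try:
                return requests.get(url, timeout=timeout).status_code == 200
            except requests.RequestException:
                return False

        while True:
            for url in SERVICES:
                status = "UP" if is_up(url) else "DOWN"
                print(time.strftime("%Y-%m-%dT%H:%M:%S"), url, status)
            time.sleep(300)  # re-validate every five minutes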

    The Luxembourg database of trichothecene type B F. graminearum and F. culmorum producers

    Data specific to 486 strains belonging to Fusarium graminearum and Fusarium culmorum were manually collected from Luxembourg field monitoring campaigns between the years 2007 and 2013. It is of interest to store such data in a web-enabled advanced database to help in epidemiological studies. Hence, we describe the design and development of a Fusarium database added to the Luxembourg Microbial Culture Collection (LuxMCC™) web interface at the Luxembourg Institute of Science and Technology (LIST). The database has three main features: (1) filter search, (2) a detailed viewer of isolate information, and (3) an Excel export function for the dataset. Information on the fungal strains includes genetic chemotypes and data on selected agronomic factors and crop management issues, with geographic localization. The database constitutes a rich source of data for addressing epidemiological issues related to these two species. It will be updated regularly with improved features to increase its utility.
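
    The abstract does not publish the schema, so the following is a guessed relational sketch of the filter-search feature, with sqlite3 standing in for whatever engine backs the LuxMCC™ interface; the field names and example chemotypes are illustrative only.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE strain (
                           id INTEGER PRIMARY KEY,
                           species TEXT,      -- 'F. graminearum' or 'F. culmorum'
                           chemotype TEXT,    -- e.g. '3-ADON', '15-ADON', 'NIV'
                           year INTEGER,      -- sampling campaign, 2007-2013
                           latitude REAL,
                           longitude REAL)""")
        con.execute("INSERT INTO strain VALUES (1, 'F. graminearum', '15-ADON', 2009, 49.6, 6.1)")

        # Filter search: all F. graminearum strains of a given chemotype
        rows = con.execute(
            "SELECT id, year, latitude, longitude FROM strain "
            "WHERE species = ? AND chemotype = ?",
            ("F. graminearum", "15-ADON")).fetchall()
        print(rows)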

    Theorising Monitoring: Algebraic Models of Web Monitoring in Organisations

    Our lives are facilitated and mediated by software. Thanks to software, data on nearly everything can be generated, accessed and analysed for all sorts of reasons. Software technologies, combined with political and commercial ideas and practices, have led to a wide range of our activities being monitored, which is the source of concerns about surveillance and privacy. We pose the questions: What is monitoring? Do diverse and disparate monitoring systems have anything in common? What role does monitoring play in contested issues of surveillance and privacy? We are developing an abstract theory for studying monitoring that begins by capturing structures common to many different monitoring practices. The theory formalises the idea that monitoring is a process that observes the behaviour of people and objects in a context. Such entities and their behaviours can be represented by abstract data types, and their observable attributes by logics. In this paper, we give a formal model of monitoring based on the idea that behaviour is modelled by streams of data, and apply the model to a social context: the monitoring of web usage by staff and members of an organisation.
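
    In the stream-based view just described, a behaviour is a stream of timestamped events and a monitor is a function that observes the stream through a property of interest. The sketch below renders that idea in Python; the event shape and the policy predicate are illustrative, not the paper's formal definitions.

        from typing import Callable, Iterator, NamedTuple

        class Event(NamedTuple):
            time: int      # discrete time step
            agent: str     # member of the organisation
            url: str       # observed web request

        def monitor(stream: Iterator[Event],
                    observed: Callable[[Event], bool]) -> Iterator[Event]:
            """Observe a behaviour stream, yielding the events satisfying a property."""
            for event in stream:
                if observed(event):
                    yield event

        behaviour = iter([Event(0, "alice", "https://intranet/home"),
                          Event(1, "bob", "https://example.com/games")])
        flagged = monitor(behaviour, lambda e: "intranet" not in e.url)
        print(list(flagged))  # only bob's off-intranet request is observed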

    Oregon Freight Data Mart

    Increasing freight volumes are adding pressure to the Oregon transportation system. Monitoring the performance of the transportation system and freight movements is essential to guarantee the economic development of the region, the efficient allocation of resources, and the quality of life of all Oregonians. Freight data is expensive to collect and maintain. Confidentiality issues, the size of the datasets, and the complexity of freight movements are barriers that preclude easy access to and analysis of freight data. Data accessibility and integration are essential to ensure successful freight planning and consistency across regional partner agencies and planning organizations. With respect to Internet-based mapping technology in freight data collection and planning, the main objectives of this project are to: (a) address implementation issues associated with data integration, (b) present a system architecture that leverages existing publicly available interfaces and web applications to accelerate product development and reduce costs, (c) describe an existing web-based mapping prototype and its capabilities, (d) state lessons learned and present suggestions to streamline the integration and visualization of freight data, and (e) discuss load-time and display-quality issues associated with the visualization of transportation data in Internet-based mapping applications. The strategies and methodologies described in this report are equally applicable to the display of areas, such as states or counties, and of linear features such as highways, waterways, and railways. Despite data integration challenges, Internet-based mapping provides a cost-effective and appealing tool to store, access, and communicate freight data, as well as to enhance our understanding of freight issues. Institutional barriers, not technology, are the most demanding hurdles to widely implementing a freight data web-based mapping application in the near future.
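
    One common mitigation for the load-time and display-quality issues noted above, though not necessarily the report's own approach, is to simplify linear features such as highways before serving them to a web map. A minimal sketch using shapely:

        from shapely.geometry import LineString

        highway = LineString([(0, 0), (0.1, 0.05), (0.2, 0.02), (1, 0), (2, 0.5)])

        # Douglas-Peucker simplification: a larger tolerance keeps fewer
        # vertices, giving smaller payloads and faster display at the cost
        # of positional detail.
        simplified = highway.simplify(tolerance=0.1)
        print(len(highway.coords), "->", len(simplified.coords))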