Global-Scale Resource Survey and Performance Monitoring of Public OGC Web Map Services
One of the most widely implemented service standards provided by the Open
Geospatial Consortium (OGC) to the user community is the Web Map Service (WMS).
WMS is widely employed globally, but little is known about the global
distribution, adoption status, or service quality of these online WMS
resources. To fill this void, we investigated global WMS resources and
performed distributed performance monitoring of these services. This paper
explicates a distributed monitoring framework that was used to monitor 46,296
WMSs continuously for over one year and a crawling method to discover these
WMSs. We analyzed server locations, provider types, themes, the spatiotemporal
coverage of map layers, and the service versions for 41,703 valid WMSs.
Furthermore, we appraised the stability and performance of the basic operations
(i.e., GetCapabilities and GetMap) for 1,210 selected WMSs. We discuss the major
reasons for request errors and performance issues, as well as the relationship
between service response times and the spatiotemporal distribution of client
monitoring sites. This paper will help service providers, end users and
developers of standards to grasp the status of global WMS resources, as well as
to understand the adoption status of OGC standards. The conclusions drawn in
this paper can benefit geospatial resource discovery, service performance
evaluation, and guide service performance improvements.
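As an illustration of the kind of probe such a monitoring framework relies on, the sketch below times the two basic WMS operations against a single endpoint. The endpoint URL and layer name are placeholders, and the abstract does not describe the framework's actual implementation; only the request parameters follow the OGC WMS 1.3.0 standard.

```python
# Minimal sketch of a WMS monitoring probe: time the two basic
# operations (GetCapabilities, GetMap) against one endpoint.
# The endpoint URL and layer name below are placeholders.
import time
import urllib.parse
import urllib.request

def probe(base_url: str, params: dict, timeout: float = 30.0) -> tuple[int, float]:
    """Issue one WMS request and return (HTTP status, elapsed seconds)."""
    url = base_url + "?" + urllib.parse.urlencode(params)
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # drain the body so timing covers the full transfer
        return resp.status, time.monotonic() - start

BASE = "http://example.org/wms"  # placeholder endpoint

# GetCapabilities: asks the server to describe its layers and versions.
status, dt = probe(BASE, {"service": "WMS", "request": "GetCapabilities"})
print(f"GetCapabilities: HTTP {status} in {dt:.2f}s")

# GetMap: requests a rendered map image (standard WMS 1.3.0 parameters;
# for EPSG:4326 in 1.3.0 the bbox axis order is lat/lon).
status, dt = probe(BASE, {
    "service": "WMS", "version": "1.3.0", "request": "GetMap",
    "layers": "some_layer", "styles": "", "crs": "EPSG:4326",
    "bbox": "-90,-180,90,180", "width": "256", "height": "256",
    "format": "image/png",
})
print(f"GetMap: HTTP {status} in {dt:.2f}s")
```

A distributed deployment would run probes like this from multiple client monitoring sites and aggregate the response times and error codes.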
An Overlay Architecture for Personalized Object Access and Sharing in a Peer-to-Peer Environment
Due to its exponential growth and decentralized nature, the Internet has evolved into a chaotic repository, making it difficult for users to discover and access resources of interest to them. As a result, users have to deal with the problem of information overload. The emergence of the Semantic Web provides Internet users with the ability to associate explicit, self-described semantics with resources. This ability will in turn facilitate the development of ontology-based resource discovery tools that help users retrieve information efficiently. However, it is widely believed that the Semantic Web of the future will be a complex web of smaller ontologies, mostly created by groups of web users who share a similar interest, referred to as Communities of Interest. This thesis proposes a solution to the information overload problem using a user-driven framework, referred to as a Personalized Web, that allows individual users to organize themselves into Communities of Interest based on ontologies agreed upon by all community members. Within this framework, users can define and augment their personalized views of the Internet by associating specific properties and attributes with resources and by defining constraint functions and rules that govern the interpretation of the semantics associated with those resources. Such views can then be used to capture the user's interests and be integrated into a user-defined Personalized Web. As a proof of concept, a Personalized Web architecture is developed that employs ontology-based semantics and a structured peer-to-peer overlay network to provide a foundation for semantically based resource indexing and advertising. To investigate mechanisms that support resource advertising and retrieval in this architecture, three agent-driven advertising and retrieval schemes, the Aggressive scheme, the Crawler-based scheme, and the Minimum-Cover-Rule scheme, were implemented and evaluated in both stable and churn environments. In addition to the development of a Personalized Web architecture that deals with typical web resources, this thesis used a case study to explore the architecture's potential to support future web service workflow applications. The results of this investigation demonstrated that the architecture can support the automation of service discovery, negotiation, and invocation, allowing service consumers to realize a personalized web service workflow. Further investigation will be required to improve the performance of this automation and to allow it to be performed in a secure and robust manner. To support the next-generation Internet, further exploration will be needed to develop a Personalized Web that includes ubiquitous and pervasive resources.
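The abstract names a structured peer-to-peer overlay used for semantically based resource indexing and advertising. The toy sketch below illustrates that idea with an in-memory stand-in for a DHT keyed by hashed ontology concepts; the three agent-driven schemes and the real overlay protocol are not detailed in the abstract, so everything here (concept names, URLs, the dict-as-DHT) is illustrative only.

```python
# Toy illustration of ontology-keyed resource advertising over a
# structured overlay. The in-memory "DHT" and the concept names are
# stand-ins; a real deployment would route puts/gets across peers.
import hashlib
from collections import defaultdict

def dht_key(concept: str) -> str:
    """Hash an ontology concept URI to a flat overlay key."""
    return hashlib.sha1(concept.encode()).hexdigest()

dht: dict[str, list[str]] = defaultdict(list)  # simulated distributed hash table

def advertise(resource_url: str, concepts: list[str]) -> None:
    """Index a resource under every ontology concept it is annotated with."""
    for c in concepts:
        dht[dht_key(c)].append(resource_url)

def retrieve(concept: str) -> list[str]:
    """Look up all resources advertised under one concept."""
    return dht.get(dht_key(concept), [])

advertise("http://example.org/papers/42", ["coi:Hiking", "coi:TrailMaps"])
print(retrieve("coi:Hiking"))  # -> ['http://example.org/papers/42']
```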
The Hierarchic treatment of marine ecological information from spatial networks of benthic platforms
Measuring biodiversity simultaneously in different locations, at different temporal scales, and over wide spatial scales is of strategic importance for improving our understanding of how marine ecosystems function and for conserving their biodiversity. Monitoring networks of cabled observatories, along with other docked autonomous systems (e.g., Remotely Operated Vehicles [ROVs], Autonomous Underwater Vehicles [AUVs], and crawlers), are being conceived and established at a spatial scale capable of tracking energy fluxes across benthic and pelagic compartments, as well as across geographic ecotones. At the same time, optoacoustic imaging is undergoing an unprecedented expansion in marine ecological monitoring, enabling the acquisition of new biological and environmental data at an appropriate spatiotemporal scale. At this stage, one of the main obstacles to the effective application of these technologies is the processing, storage, and treatment of the complex ecological information acquired. Here, we provide a conceptual overview of the technological developments in the multiparametric generation, storage, and automated hierarchic treatment of biological and environmental information required to capture the spatiotemporal complexity of a marine ecosystem. In doing so, we present a pipeline of ecological data acquisition and processing in discrete steps amenable to automation. We also give an example of computing population biomass, community richness, and biodiversity data (as indicators of ecosystem functionality) with an Internet Operated Vehicle (a mobile crawler). Finally, we discuss the software requirements for automated data processing at the level of cyber-infrastructures, including sensor calibration and control, data banking, and ingestion into large data portals.
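To make the indicator computation concrete, here is a hedged sketch of community richness and a biodiversity index computed from per-species counts such as those extracted from crawler imagery. Shannon H' is used as one common biodiversity index; the abstract names richness and biodiversity without fixing a formula, and the counts below are invented for illustration.

```python
# Sketch of two community indicators from per-species detection counts.
# The counts are hypothetical; a real pipeline would take them from
# automated image classification on the platform's optoacoustic data.
import math

counts = {"fish_a": 40, "fish_b": 10, "crab_c": 5}  # hypothetical detections

# Species richness: number of species observed at least once.
richness = sum(1 for n in counts.values() if n > 0)

# Shannon diversity H' = -sum(p_i * ln p_i) over relative abundances p_i.
total = sum(counts.values())
shannon = -sum((n / total) * math.log(n / total) for n in counts.values() if n > 0)

print(f"richness = {richness}, Shannon H' = {shannon:.3f}")
```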
Digital Creativity Support for Original Journalism
The decline in circulations and revenues resulting from the digitalization of news production and consumption has led to a crisis in journalism. Journalists have less time to research, investigate, and write original stories, creating problems for our democratic processes and for holding the powerful to account. This paper reports the architecture, features, and rationale of new digital creativity support designed to help journalists discover more original angles on stories. It also summarises the evaluation of the tool's use in three newsrooms.
A novel defense mechanism against web crawler intrusion
Web robots, also known as crawlers or spiders, are used by search engines, hackers, and spammers to gather information about web pages. Timely detection and prevention of unwanted crawlers increases the privacy and security of websites. In this research, a novel method to identify web crawlers is proposed to prevent unwanted crawlers from accessing websites. The proposed method uses a five-factor identification process to detect unwanted crawlers. This study provides pretest and posttest results, along with a systematic evaluation of web pages protected by the proposed identification technique versus web pages without it. An experiment was performed with repeated measures for two groups, each containing ninety web pages. The results of a logistic regression analysis of the treatment and control groups confirm that the proposed five-factor identification process is an effective mechanism for preventing unwanted web crawlers.
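The abstract does not list the five identification factors, so the sketch below uses five hypothetical stand-ins (user-agent string, robots.txt fetch, header completeness, request rate, and a honeypot link) purely to show the shape of such a multi-factor test.

```python
# Sketch of a five-factor crawler check. All five factors and the
# decision threshold are illustrative guesses, not the paper's method.
from dataclasses import dataclass

@dataclass
class Visit:
    user_agent: str
    fetched_robots_txt: bool
    sent_accept_language: bool
    requests_per_minute: float
    followed_hidden_link: bool  # honeypot link invisible to human visitors

def crawler_score(v: Visit) -> int:
    """Count how many of the five illustrative factors fire."""
    factors = [
        any(bot in v.user_agent.lower() for bot in ("bot", "crawler", "spider")),
        v.fetched_robots_txt,
        not v.sent_accept_language,   # real browsers normally send this header
        v.requests_per_minute > 60,
        v.followed_hidden_link,
    ]
    return sum(factors)

v = Visit("ExampleBot/1.0", True, False, 120.0, True)
if crawler_score(v) >= 3:             # threshold is likewise illustrative
    print("treat as unwanted crawler")
```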
A Review on Web Crawling System for Web Databases
As the deep web grows at a rapid pace, there has been increasing interest in techniques that help locate deep-web interfaces efficiently. However, due to the large volume of web resources and the dynamic nature of the deep web, achieving wide coverage and high efficiency is a challenging problem. The system reviewed here proposes a three-stage framework for efficiently harvesting deep-web interfaces. In the first stage, the web crawler performs site-based searching for center pages with the help of search engines, avoiding visits to a large number of pages. In this paper we survey how such web crawlers work and the approaches available in existing frameworks from various researchers.
Hybrid focused crawling on the Surface and the Dark Web
Focused crawlers enable the automatic discovery of Web resources about a given topic by automatically navigating
through the Web link structure and selecting the hyperlinks to follow by estimating their relevance to the topic of
interest. This work proposes a generic focused crawling framework for discovering resources on any given topic
that reside on the Surface or the Dark Web. The proposed crawler is able to seamlessly navigate through the
Surface Web and several darknets present in the Dark Web (i.e., Tor, I2P, and Freenet) during a single crawl by
automatically adapting its crawling behavior and its classifier-guided hyperlink selection strategy based on the
destination network type and the strength of the local evidence present in the vicinity of a hyperlink. It investigates
11 hyperlink selection methods, among which is a novel strategy based on the dynamic linear combination
of a link-based classifier and a parent Web page classifier. This hybrid focused crawler is demonstrated for the discovery of
Web resources containing recipes for producing homemade explosives. The evaluation experiments indicate the
effectiveness of the proposed focused crawler for both the Surface and the Dark Web.
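The dynamic linear combination described above can be made concrete with a small sketch. The two classifiers and the evidence-dependent weighting rule below are illustrative guesses, since the paper's exact classifiers and weights are not given in the abstract.

```python
# Sketch of classifier-guided hyperlink scoring: a dynamic linear
# combination of a link-based classifier (anchor text and URL) and a
# parent-page classifier, weighted by strength of local evidence.
# Keyword lists, scores, and weights are placeholders.
TOPIC_TERMS = ("recipe", "homemade", "synthesis")  # placeholder topic keywords

def link_classifier(anchor_text: str, url: str) -> float:
    """Placeholder: topic relevance estimated from local link evidence only."""
    text = (anchor_text + " " + url).lower()
    return min(1.0, sum(t in text for t in TOPIC_TERMS) / len(TOPIC_TERMS))

def parent_classifier(parent_text: str) -> float:
    """Placeholder: topic relevance of the page the hyperlink appears on."""
    text = parent_text.lower()
    return min(1.0, sum(t in text for t in TOPIC_TERMS) / len(TOPIC_TERMS))

def hyperlink_score(anchor_text: str, url: str, parent_text: str) -> float:
    local = link_classifier(anchor_text, url)
    w = 0.8 if local > 0.0 else 0.2  # strong local evidence -> trust the link classifier
    return w * local + (1 - w) * parent_classifier(parent_text)

# The crawl frontier would then prioritize hyperlinks by this score.
print(hyperlink_score("homemade recipe guide", "http://example.onion/guide", "index of forum threads"))
```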
Smart Three Phase Crawler for Mining Deep Web Interfaces
As the deep web grows at a rapid pace, there has been increasing interest in techniques that help locate deep-web interfaces efficiently. However, due to the large volume of web resources and the dynamic nature of the deep web, achieving wide coverage and high efficiency is a challenging problem. This work proposes a three-stage framework for efficiently harvesting deep-web interfaces. In the first stage, the web crawler performs site-based searching for center pages with the help of search engines, avoiding visits to a large number of pages. We also survey how web crawlers work and the approaches available in existing frameworks from various researchers.
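A minimal sketch of the first stage described above: use a search engine to locate candidate center pages for a site, then test them for searchable forms that front a database. The search_engine() function is a placeholder for a real search-engine API, and the form check is a crude heuristic rather than the paper's classifier.

```python
# Sketch of stage one of deep-web interface harvesting: find candidate
# center pages via a search engine, then keep pages with search forms.
import re
import urllib.request

def search_engine(query: str) -> list[str]:
    """Placeholder for a search-engine lookup returning result URLs."""
    return ["http://example.org/search", "http://example.org/advanced-search"]

def has_search_form(html: str) -> bool:
    """Heuristic: a page with a <form> and a text input likely fronts a database."""
    return bool(re.search(r"<form[\s>]", html, re.I)) and \
           bool(re.search(r'type=["\']?(text|search)', html, re.I))

def harvest_site(site: str) -> list[str]:
    interfaces = []
    for url in search_engine(f"site:{site} search"):  # stage 1: candidate center pages
        try:
            html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue  # unreachable page; skip it
        if has_search_form(html):                     # later stages would crawl deeper
            interfaces.append(url)
    return interfaces

print(harvest_site("example.org"))
```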