
    Multi-Media Mail in heterogeneous Networks

    The MIME approach seems to be the most reasonable effort for allowing the sending and receiving of multimedia messages using standard Internet mail transport facilities. By providing new header fields, such as MIME-Version, Content-Type, and Content-Transfer-Encoding, it is now possible to include various kinds of information types, e.g. audio, images, richtext, or video, in an RFC 822-conformant mail. Making use of these headers, it is possible to fully describe an attached body part, so that a receiving mail user agent is able to display it without any loss of information. Additionally, the definition of the "multipart" and "message" content types allows the creation of hierarchically structured mails, e.g. a message containing two alternative parts of information, one that can be shown using a simple ASCII terminal, the other to be displayed on a multimedia workstation. Allowing the definition of bilaterally defined content types and providing a standardized means of establishing new content types prevent MIME from being a one-way road and supply mechanisms to extend MIME for future use.
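
    A minimal sketch of the kind of message the abstract describes, built with Python's standard email library; the addresses, subject and body text are placeholders, not taken from the paper:

    # Illustrative sketch only: builds a MIME multipart/alternative message
    # carrying MIME-Version, Content-Type and Content-Transfer-Encoding headers.
    # Addresses and body text are placeholder values, not from the paper.
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.org"        # placeholder sender
    msg["To"] = "bob@example.org"            # placeholder recipient
    msg["Subject"] = "Multimedia mail example"

    # Plain-text alternative for a simple ASCII terminal.
    msg.set_content("Plain ASCII version of the message.")

    # Richer alternative for a multimedia-capable user agent.
    msg.add_alternative("<html><body><b>HTML version</b></body></html>",
                        subtype="html")

    # The resulting RFC 822-style message carries MIME-Version,
    # Content-Type: multipart/alternative and per-part encoding headers.
    print(msg.as_string())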

    Repository Replication Using NNTP and SMTP

    We present the results of a feasibility study using shared, existing, network-accessible infrastructure for repository replication. We investigate how dissemination of repository contents can be "piggybacked" on top of existing email and Usenet traffic. Long-term persistence of the replicated repository may be achieved thanks to current policies and procedures which ensure that mail messages and news posts are retrievable for evidentiary and other legal purposes for many years after their creation date. While the preservation issues of migration and emulation are not addressed with this approach, it does provide a simple method of refreshing content with unknown partners. Comment: This revised version has 24 figures and a more detailed discussion of the experiments conducted by us.
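
    A rough sketch of the piggybacking idea under stated assumptions (a placeholder mail relay, addresses and repository record), not the authors' implementation:

    # Minimal sketch: wrap a repository record as a mail attachment and submit
    # it over SMTP, so the record "piggybacks" on normal email traffic.
    # Host, addresses and payload are placeholder assumptions.
    import smtplib
    from email.message import EmailMessage

    record_xml = b"<record>...</record>"     # placeholder repository record

    msg = EmailMessage()
    msg["From"] = "repository@example.org"
    msg["To"] = "archive-list@example.org"
    msg["Subject"] = "Repository record replication"
    msg.set_content("Replicated repository record attached.")
    msg.add_attachment(record_xml, maintype="application",
                       subtype="xml", filename="record.xml")

    with smtplib.SMTP("mail.example.org") as smtp:   # placeholder relay
        smtp.send_message(msg)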

    Web browser for delay-tolerant networks

    Due to the growth of the Internet, the increasing number of devices, and the growing complexity of network structures, the problem of time delays during information transmission has arisen. In environments with long transmission delays, modern protocols may become inefficient or even useless. Delay-Tolerant Networking (DTN) is one approach that addresses the problem of long transmission delays. In this thesis, an approach to web access in such networks is proposed. The problem of data transmission in networks with long delays is considered. Special methods exist for data transmission in computer networks, but traditional data transmission protocols do not work well in networks with long delays, e.g. when transmitting over long distances, such as in space, or when connectivity may be disrupted, such as in mobile networks. It is therefore necessary to replace TCP and to change the existing web protocol (Hypertext Transfer Protocol, HTTP) in order to allow HTTP data transmissions in DTN environments. In the thesis, HTTP is analyzed, and an adaptation of HTTP to DTN environments, as proposed in earlier research, is reviewed and extended further. A client part is created and the implementation is described. The client makes it possible to use HTTP over DTN. An open-source browser is modified and the necessary extensions are developed. The extensions allow the DTN transport protocol (i.e. the Bundle Protocol) to be used as a transport option other than TCP. The software module for the web browser is built on the Mozilla platform. It was shown that it is possible to create a browser that works in DTNs.
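
    A conceptual sketch of switching between TCP and a delay-tolerant transport for an HTTP request; the BundleTransport class below is hypothetical and only stands in for a real Bundle Protocol binding, which this sketch does not provide:

    # Conceptual sketch only, not the thesis's browser module.
    import socket

    class TcpTransport:
        """Ordinary TCP transport for well-connected networks."""
        def send_request(self, host: str, request: bytes) -> bytes:
            with socket.create_connection((host, 80), timeout=10) as s:
                s.sendall(request)
                return s.recv(65536)   # sketch: reads only the first chunk

    class BundleTransport:
        """Hypothetical stand-in for a Bundle Protocol transport."""
        def send_request(self, host: str, request: bytes) -> bytes:
            raise NotImplementedError("would hand the request to a BP agent")

    def fetch(host: str, path: str, delay_tolerant: bool = False) -> bytes:
        request = (f"GET {path} HTTP/1.1\r\nHost: {host}\r\n"
                   f"Connection: close\r\n\r\n").encode()
        transport = BundleTransport() if delay_tolerant else TcpTransport()
        return transport.send_request(host, request)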

    Identifying Web Tables - Supporting a Neglected Type of Content on the Web

    The abundance of data on the Internet facilitates the improvement of extraction and processing tools. The trend in open data publishing encourages the adoption of structured formats like CSV and RDF. However, there is still a plethora of unstructured data on the Web which we assume contains semantics. For this reason, we propose an approach to derive semantics from web tables, which are still the most popular publishing tool on the Web. The paper also discusses methods and services for unstructured data extraction and processing as well as machine learning techniques to enhance such a workflow. The eventual result is a framework to process, publish and visualize linked open data. The software enables table extraction from various open data sources in the HTML format and an automatic export to the RDF format, making the data linked. The paper also gives an evaluation of machine learning techniques in conjunction with string similarity functions to be applied in a table recognition task. Comment: 9 pages, 4 figures.
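
    An illustrative sketch, not the paper's framework, of turning an HTML table into RDF triples using pandas and rdflib; the sample table and namespace are made up:

    # Extract an HTML table and publish its cells as RDF triples.
    import io
    import pandas as pd
    from rdflib import Graph, Literal, Namespace, URIRef

    EX = Namespace("http://example.org/table/")     # placeholder namespace
    html = """<table>
      <tr><th>City</th><th>Population</th></tr>
      <tr><td>Oslo</td><td>700000</td></tr>
    </table>"""                                     # made-up sample table

    g = Graph()
    for t_index, table in enumerate(pd.read_html(io.StringIO(html))):
        for r_index, row in table.iterrows():
            subject = URIRef(EX[f"t{t_index}/row{r_index}"])
            for column, value in row.items():
                # Column headers become predicates, cells become literals.
                predicate = EX[str(column).replace(" ", "_")]
                g.add((subject, predicate, Literal(value)))

    print(g.serialize(format="turtle"))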

    HTTP Protocol for Teaching HW/SW Platform FITKit

    The goal of the bachelor thesis is the implementation of the HTTP protocol for the FITkit teaching platform. After introducing the FITkit and its components, the thesis focuses on implementation details of the Hypertext Transfer Protocol (HTTP), such as its versions, basic communication concepts, status messages, and client authentication. The implementation is based on the libfitkit and libkitclient API libraries, which were developed for the FITkit.
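
    A minimal sketch of HTTP status signaling, not the FITkit firmware implementation; the port, paths and messages are arbitrary:

    # Tiny demo server: answers GET / with 200 OK, anything else with 404.
    import socket

    def serve(port: int = 8080) -> None:
        with socket.socket() as server:
            server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            server.bind(("", port))
            server.listen(1)
            while True:
                conn, _ = server.accept()
                with conn:
                    request = conn.recv(1024).decode(errors="replace")
                    if request.startswith("GET / "):
                        status, body = "HTTP/1.1 200 OK", b"Hello from the demo server"
                    else:
                        status, body = "HTTP/1.1 404 Not Found", b"Not found"
                    headers = (f"{status}\r\nContent-Length: {len(body)}\r\n"
                               f"Content-Type: text/plain\r\n\r\n")
                    conn.sendall(headers.encode() + body)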

    Clustering and classification methods for spam analysis

    Spam emails are a major tool for criminals to distribute malware, conduct fraudulent activity, sell counterfeit products, etc. Thus, security companies are interested in researching spam. Unfortunately, due to spammers' detection-avoidance techniques, most of the existing tools for spam analysis are not able to provide accurate information about spam campaigns. Moreover, they are not able to link together campaigns initiated by the same sender. F-Secure, a cybersecurity company, collects vast amounts of spam for analysis. The collection of threat intelligence from these messages currently involves a lot of manual work. In this thesis we apply state-of-the-art data-analysis techniques to increase the level of automation in the analysis process, thus enabling human experts to focus on high-level information such as campaigns and actors. The thesis discusses a novel method of spam analysis in which email messages are clustered by different characteristics and the clusters are presented as a graph. The graph representation allows the analyst to see evolving campaigns and even connections between related messages which themselves have no features in common. This makes our analysis tool more powerful than previous methods that simply cluster emails into sets. We implemented a proof-of-concept version of the analysis tool to evaluate the usefulness of the approach. Experiments show that the graph representation and clustering by different features make it possible to link together large and complex spam campaigns that were previously not detected. The tool also found evidence that different campaigns were likely organized by the same spammer. The results indicate that the graph-based approach is able to extract new, useful information about spam campaigns.
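
    A conceptual sketch of the graph idea, not F-Secure's tool: messages that share a feature value are linked, and connected components approximate campaigns; the sample messages and feature names are made up:

    # Link spam messages that share a feature value (e.g. sender domain, URL)
    # and read connected components as candidate campaigns.
    from collections import defaultdict
    import networkx as nx

    messages = [
        {"id": "m1", "sender_domain": "shady.example", "url": "http://a.example"},
        {"id": "m2", "sender_domain": "shady.example", "url": "http://b.example"},
        {"id": "m3", "sender_domain": "other.example", "url": "http://b.example"},
    ]

    graph = nx.Graph()
    index = defaultdict(list)            # (feature, value) -> message ids
    for msg in messages:
        graph.add_node(msg["id"])
        for feature in ("sender_domain", "url"):
            index[(feature, msg[feature])].append(msg["id"])

    # Connect messages that agree on some feature value.
    for (feature, value), ids in index.items():
        for first, second in zip(ids, ids[1:]):
            graph.add_edge(first, second, feature=feature, value=value)

    # m1-m2-m3 end up in one component even though m1 and m3 share no feature.
    print(list(nx.connected_components(graph)))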

    Grounding semantic web services with rules

    Semantic web services achieve effects in the world through web services, so the connection to those services - the grounding - is of paramount importance. The established technique is to use XML-based translations between ontologies and the SOAP message formats of the services, but these mappings cannot address the growing number of non-SOAP services, and they step outside the ontological world to describe the mapping. We present an approach which draws the service's interface into the ontology: we define ontology objects which represent the whole HTTP message, and use backward-chaining rules to translate between semantic service invocation instances and the HTTP messages passed to and from the service. We present a case study using Amazon's popular Simple Storage Service.
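
    A rough sketch of the grounding idea under assumed names (the http vocabulary below is invented, not the paper's ontology or rule language): describe an HTTP request as RDF triples and render them into a wire-level message:

    # Ontology-level description of an HTTP request, rendered to the wire format.
    from rdflib import Graph, Literal, Namespace, URIRef

    HTTP = Namespace("http://example.org/http#")     # hypothetical vocabulary
    g = Graph()

    req = URIRef("http://example.org/invocation/1")
    g.add((req, HTTP.method, Literal("PUT")))
    g.add((req, HTTP.path, Literal("/my-bucket/my-object")))
    g.add((req, HTTP.header_Host, Literal("s3.amazonaws.com")))
    g.add((req, HTTP.body, Literal("hello world")))

    def render(graph: Graph, request: URIRef) -> str:
        """Translate the ontology-level description into an HTTP message."""
        method = graph.value(request, HTTP.method)
        path = graph.value(request, HTTP.path)
        host = graph.value(request, HTTP.header_Host)
        body = graph.value(request, HTTP.body) or ""
        return (f"{method} {path} HTTP/1.1\r\nHost: {host}\r\n"
                f"Content-Length: {len(str(body))}\r\n\r\n{body}")

    print(render(g, req))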