Multimedia delivery in the future internet
The term "Networked Media" implies that all kinds of media, including text, images, 3D graphics, audio
and video, are produced, distributed, shared, managed and consumed online through various networks,
such as the Internet, fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future of the Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted
with a bewildering range of media, services and applications, and with technological innovations concerning
media formats, wireless networks, and terminal types and capabilities, and there is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more
than 100 million users have downloaded at least one (multi)media file, and over 47 million of them do so
regularly, searching across more than 160 Exabytes of content. In the near future these numbers are expected
to rise exponentially: Internet content is expected to grow by at least a factor of six, exceeding
990 Exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in the near- to mid-term future, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalized
way, improving citizens' quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer in-network
adaptation, machine-to-machine communication (including RFIDs), rich 3D content, community networks
and the use of peer-to-peer (P2P) overlays are expected to generate new models of
interaction and cooperation, to support enhanced perceived quality of experience (PQoE), and to enable
innovative applications "on the move", such as virtual collaboration environments, personalised services/
media, virtual sport groups, online gaming and edutainment. In this context, interaction with content,
combined with interactive multimedia search capabilities across distributed repositories, opportunistic P2P
networks, and dynamic adaptation to the characteristics of diverse mobile terminals, is expected to
contribute towards such a vision.
Based on work carried out in a number of EC co-funded projects in Framework Programme 6 (FP6)
and Framework Programme 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed to this white paper, which aims to describe the status, the state of the art, the challenges and the way
ahead in the area of content-aware media delivery platforms.
Anomaly-based network intrusion detection methods
The article deals with the detection of network anomalies, where an anomaly is anything that deviates markedly from normal operation. Machine learning systems were used for the detection of anomalies. Machine learning can be considered a supporting, or limited, form of artificial intelligence: a machine learning system usually starts with some knowledge and a corresponding knowledge organization, so that it can interpret, analyse and test the knowledge acquired. Several machine learning techniques are available; we tested decision tree learning and Bayesian networks. The open source data-mining framework WEKA was the tool we used for testing the classification, clustering and association algorithms and for visualizing our results. WEKA is a collection of machine learning algorithms for data mining tasks.
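As a rough illustration of the decision-tree experiments this abstract describes, the sketch below trains a decision tree to separate normal traffic from anomalies. It uses scikit-learn in Python rather than WEKA's Java API, and all feature names and data are hypothetical, not from the article.

```python
# A minimal sketch of anomaly classification with a decision tree, analogous to
# the WEKA experiments described above. scikit-learn stands in for WEKA here;
# the flow features and synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical flow features: duration, bytes sent, packet count, SYN ratio.
normal = rng.normal(loc=[1.0, 500, 10, 0.1], scale=[0.5, 200, 5, 0.05], size=(500, 4))
attack = rng.normal(loc=[0.1, 60, 80, 0.9], scale=[0.05, 30, 20, 0.05], size=(50, 4))

X = np.vstack([normal, attack])
y = np.array([0] * len(normal) + [1] * len(attack))  # 0 = normal, 1 = anomaly

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# WEKA's J48 implements C4.5; scikit-learn's CART is a close relative and
# serves the same illustrative purpose of learning interpretable split rules.
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), target_names=["normal", "anomaly"]))
```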
Analysis of intrusion detection system (IDS) in border gateway protocol
University of Technology, Sydney, Faculty of Engineering and Information Technology.
Border Gateway Protocol (BGP) is the de-facto inter-domain routing protocol used across the
thousands of Autonomous Systems (ASes) joined together in the Internet. The main purpose of
BGP is to keep routing information up-to-date across Autonomous Systems and to provide
a loop-free path to the destination. Internet connectivity plays a vital role in organizations such
as businesses, universities and government bodies for exchanging information. This
information is exchanged over the Internet in the form of packets, which contain the
source and destination addresses. Because the Internet is a dynamic and sensitive system
that changes continuously, it is necessary to protect it from intruders.
Security has been a major issue for BGP, and BGP still suffers from serious threats today.
The DoS attack is the major security threat to the Internet, and TCP SYN flooding is its most
common form. The aim of such a DoS attack is to consume large amounts of bandwidth. Any
system that is connected to the Internet and uses TCP services is prone to such attacks. It is
important to detect such malicious activities in a network, which could otherwise compromise
the availability of services.
This thesis proposes and implements two new security methods for the protection of the BGP data
plane, "Analysis of BGP Security Vulnerabilities" and "Border Gateway Protocol Anomaly
Detection using the Failure Quality Control Method", to detect malicious and anomalous
packets in the network.
The aim of this work is to combine these algorithms with the Network Data Mining (NDM)
method to detect malicious packets in the BGP network. Furthermore, the resulting patterns can
be stored in a database as signatures to capture similar incidents in the future.
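To illustrate the kind of data-plane detection the thesis targets, here is a minimal, hypothetical sketch of threshold-based TCP SYN flood detection. The window length, threshold and packet-record shape are assumptions for illustration, not the thesis's actual method.

```python
# A minimal sketch of threshold-based TCP SYN flood detection, in the spirit of
# the data-plane anomaly detection described above. The window length and
# threshold below are illustrative assumptions.
from collections import defaultdict, deque

WINDOW_SECONDS = 5      # sliding window length (assumed)
SYN_THRESHOLD = 100     # SYNs per source per window before we flag (assumed)

class SynFloodDetector:
    def __init__(self):
        self.syn_times = defaultdict(deque)  # source IP -> timestamps of SYNs

    def observe(self, timestamp, src_ip, is_syn, is_ack):
        """Feed one TCP packet; return src_ip if it crosses the SYN-rate threshold."""
        if not (is_syn and not is_ack):      # count only pure SYNs, not SYN-ACKs
            return None
        q = self.syn_times[src_ip]
        q.append(timestamp)
        while q and q[0] < timestamp - WINDOW_SECONDS:  # expire old entries
            q.popleft()
        if len(q) > SYN_THRESHOLD:
            return src_ip
        return None

detector = SynFloodDetector()
# Replaying a burst of SYNs from one source: the source is flagged once its
# SYN rate exceeds the threshold within the sliding window.
for t in range(200):
    flagged = detector.observe(t * 0.01, "203.0.113.7", is_syn=True, is_ack=False)
    if flagged:
        print(f"possible SYN flood from {flagged} at t={t * 0.01:.2f}s")
        break
```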
Improving Knowledge-Based Systems with statistical techniques, text mining, and neural networks for non-technical loss detection
Currently, power distribution companies face several problems related to energy losses. For
example, energy consumed might not be billed due to illegal manipulation of, or a breakdown in, the customer's
measurement equipment. These types of losses are called non-technical losses (NTLs), and they
are usually greater than the losses due to the distribution infrastructure (technical losses).
Traditionally, a large number of studies have used data mining to detect NTLs, but to the best of our
knowledge, no studies involve the use of a Knowledge-Based System (KBS) created
from the knowledge and expertise of the inspectors. In the present study, a KBS was built that is
based on the knowledge and expertise of the inspectors and that uses text mining, neural networks
and statistical techniques for the detection of NTLs. These techniques
were used to extract information from samples, and this information was translated into rules,
which were joined to the rules generated from the knowledge of the inspectors. The system
was tested with real samples extracted from the databases of Endesa, one of the most
important distribution companies in Spain, which also plays an important role in international markets in
both Europe and South America and has more than 73 million customers.
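The following sketch illustrates the core idea of joining mined rules with expert rules in a small rule engine. All rule conditions, field names and thresholds are hypothetical and are not taken from the paper or from Endesa data.

```python
# A minimal sketch of a KBS that joins expert-authored rules with rules mined
# from data, as described above. Every condition here is a hypothetical stand-in
# for the rules the paper extracts via text mining, neural networks and statistics.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]  # customer record -> fires or not
    source: str                        # "expert" or "mined"

# Rules encoding inspector expertise (illustrative conditions).
expert_rules = [
    Rule("zero_consumption_active_contract",
         lambda c: c["active"] and c["kwh_last_quarter"] == 0, "expert"),
    Rule("old_meter_with_large_drop",
         lambda c: c["meter_age_years"] > 30 and c["consumption_drop_pct"] > 60, "expert"),
]

# Rules that, in the paper's pipeline, would be generated from past inspection
# samples; here they are simply hand-written placeholders.
mined_rules = [
    Rule("sudden_drop_vs_neighbourhood",
         lambda c: c["consumption_drop_pct"] > 50 and c["neighbourhood_drop_pct"] < 10, "mined"),
]

def score_customer(customer: dict) -> list[str]:
    """Return the names of all rules (expert and mined) that flag this customer."""
    return [r.name for r in expert_rules + mined_rules if r.condition(customer)]

example = {"active": True, "kwh_last_quarter": 0, "meter_age_years": 12,
           "consumption_drop_pct": 70, "neighbourhood_drop_pct": 5}
print(score_customer(example))  # names of rules proposing an NTL inspection visit
```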
T2D2: A Time Series Tester, Transformer, and Decomposer Framework for Outlier Detection
The automatic detection of outliers in time series datasets has attracted much attention in the data science community. It is not a simple task, as the data may have several components, such as a seasonal component, a trend, or a combination of the two. Furthermore, to obtain reliable and trustworthy knowledge from the data, the data itself should be understandable. To cope with these challenges, in this paper we introduce a new framework that first tests the stationarity and seasonality of a dataset and then applies a set of Fourier transforms to obtain the Fourier sample frequencies, which support a decomposer component. The proposed framework, namely TTDD (Test, Transform, Decompose, and Detection), implements a decomposer component that splits the dataset into three parts: trend, seasonal, and residual. Finally, a frequency difference detector compares the frequency of the test set to the frequency of the training set, determining the periods of discrepancy in frequency and marking them as outlier periods.
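Below is a minimal sketch of the test/transform/decompose/detect pipeline the abstract outlines, assuming statsmodels and scipy as stand-ins for the framework's components; the discrepancy rule, thresholds and data are all illustrative, not the paper's exact detector.

```python
# A minimal sketch of a test -> transform -> decompose -> detect pipeline like
# the one described above. Library choices and the frequency-discrepancy rule
# are assumptions for illustration.
import numpy as np
from scipy.signal import periodogram
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(1)
t = np.arange(400)
train = 10 + 0.02 * t + np.sin(2 * np.pi * t / 20) + rng.normal(0, 0.2, 400)
test = train.copy()
test[300:320] += np.sin(2 * np.pi * t[300:320] / 5) * 3   # injected anomaly

# 1) Test: augmented Dickey-Fuller stationarity check.
p_value = adfuller(train)[1]
print(f"ADF p-value: {p_value:.3f} (non-stationary if > 0.05)")

# 2) Transform: the periodogram gives the Fourier sample frequencies; its
#    dominant frequency suggests the seasonal period for the decomposer.
freqs, power = periodogram(train, detrend="linear")
train_peak = freqs[np.argmax(power)]
period = int(round(1 / train_peak))
print(f"dominant period: {period} samples")

# 3) Decompose: split the series into trend, seasonal and residual parts.
result = seasonal_decompose(test, period=period, model="additive")

# 4) Detect: flag windows whose dominant frequency deviates from training's.
for start in range(0, len(test) - period * 2, period):
    w = test[start:start + period * 2]
    f, p = periodogram(w, detrend="linear")
    if abs(f[np.argmax(p)] - train_peak) > train_peak * 0.5:
        print(f"outlier period around samples {start}..{start + period * 2}")
```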
- …