Kinesthetics eXtreme: An External Infrastructure for Monitoring Distributed Legacy Systems
Autonomic computing - self-configuring, self-healing, self-optimizing applications, systems and networks - is widely believed to be a promising solution to ever-increasing system complexity and the spiraling costs of human system management as systems scale to global proportions. Most results to date, however, suggest ways to architect new software constructed from the ground up as autonomic systems, whereas in the real world organizations continue to use stovepipe legacy systems and/or build 'systems of systems' that draw from a gamut of new and legacy components involving disparate technologies from numerous vendors. Our goal is to retrofit autonomic computing onto such systems externally, without any need to understand or modify the code, and in many cases even when it is impossible to recompile. We present a meta-architecture implemented as active middleware infrastructure that explicitly adds autonomic services via an attached feedback loop providing continual monitoring and, as needed, reconfiguration and/or repair. Our lightweight design and separation of concerns enable easy adoption of individual components, as well as the full infrastructure, for use with a large variety of legacy systems, new systems, and systems of systems. We summarize several experiments spanning multiple domains.
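The attached feedback loop described above - external probes feeding an analyzer that triggers repairs without touching the legacy code - can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual infrastructure; the `Probe`, `Rule`, and `run_cycle` names and the threshold-based trigger are assumptions made for the example.

```python
# Sketch of an external monitor/repair feedback loop attached to a legacy
# system: probes sample metrics from outside the system, rules decide when
# an external repair action (e.g. a restart script) should fire.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Probe:
    name: str
    read: Callable[[], float]   # samples one metric from the running system


@dataclass
class Rule:
    probe: str                  # which probe this rule watches
    threshold: float            # fire when the sample exceeds this value
    repair: Callable[[], str]   # external repair action, returns a description


def run_cycle(probes: List[Probe], rules: List[Rule]) -> List[str]:
    """One monitoring cycle: sample every probe, fire every matching repair."""
    samples = {p.name: p.read() for p in probes}
    actions = []
    for rule in rules:
        if samples.get(rule.probe, 0.0) > rule.threshold:
            actions.append(rule.repair())
    return actions
```

The key property the abstract argues for is visible here: the monitored system appears only behind the probe and repair callables, so the loop needs no access to its source code.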
A control and management architecture supporting autonomic NFV services
The proposed control, orchestration and management (COM) architecture is presented from a high-level point of view; it enables the dynamic provisioning of services such as network data connectivity or generic network slicing instances based on virtual network functions (VNF). The COM is based on Software Defined Networking (SDN) principles and is hierarchical, with a dedicated controller per technology domain. Alongside the SDN control plane for the provisioning of connectivity, an ETSI NFV management and orchestration system is responsible for the instantiation of Network Services, understood in this context as interconnected VNFs. A key, novel component of the COM architecture is the monitoring and data analytics (MDA) system, able to collect monitoring data from the network, datacenters and applications, whose outputs can be used to proactively reconfigure resources, thus adapting to future conditions such as load or degradations. To illustrate the COM architecture, a use case of a Content Delivery Network service taking advantage of the MDA's ability to collect and deliver monitoring data is experimentally demonstrated.
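The proactive part of the MDA idea - reconfiguring for predicted, not just current, conditions - can be illustrated with a toy example. The linear extrapolation and the `instances_needed` sizing rule below are assumptions chosen for the sketch; the actual MDA system's analytics are not described at this level in the abstract.

```python
# Toy sketch of proactive capacity planning from monitoring samples:
# extrapolate the load trend, then size the VNF instance count for the
# predicted load rather than the current one.

import math
from typing import List


def predict_next(samples: List[float]) -> float:
    """Naive linear extrapolation: last sample plus the average step."""
    if not samples:
        return 0.0
    if len(samples) < 2:
        return samples[-1]
    steps = [b - a for a, b in zip(samples, samples[1:])]
    return samples[-1] + sum(steps) / len(steps)


def instances_needed(samples: List[float], per_instance_capacity: float) -> int:
    """Provision enough VNF instances for the predicted load."""
    predicted = predict_next(samples)
    return max(1, math.ceil(predicted / per_instance_capacity))
```

For a load series of 10, 20, 30 requests/s the trend predicts 40, so with 15 requests/s per instance the planner would scale out to three instances before the load actually arrives - the "adapting to future conditions" behavior the abstract describes.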
Agent-based automated negotiation system for e-marketplaces
Master's thesis (Master of Engineering)
A Holistic Approach to Service Survivability
We present SABER (Survivability Architecture: Block, Evade, React), a proposed survivability architecture that blocks, evades and reacts to a variety of attacks by using several security and survivability mechanisms in an automated and coordinated fashion. Contrary to the ad hoc manner in which contemporary survivable systems are built--using isolated, independent security mechanisms such as firewalls, intrusion detection systems and software sandboxes--SABER integrates several different technologies in an attempt to provide a unified framework for responding to the wide range of attacks malicious insiders and outsiders can launch. This coordinated multi-layer approach will be capable of defending against attacks targeted at various levels of the network stack, such as congestion-based DoS attacks, software-based DoS or code-injection attacks, and others. Our fundamental insight is that while multiple lines of defense are useful, most conventional, uncoordinated approaches fail to exploit the full range of available responses to incidents. By coordinating the response, the ability to survive even in the face of successful security breaches increases substantially. We discuss the key components of SABER, how they will be integrated, and how we can leverage the promising results of the individual components to improve survivability in a variety of coordinated attack scenarios. SABER is currently in the prototyping stages, with several interesting open research topics remaining.
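The abstract's central claim is that a single coordinated dispatcher can exploit responses that isolated mechanisms would miss. A minimal sketch of that coordination idea follows; the attack categories and response names are hypothetical examples, not SABER's actual component interfaces.

```python
# Sketch of coordinated multi-layer incident response: one dispatcher maps
# each detected attack category to responses at several layers (network,
# host, application), merging them into a single de-duplicated plan instead
# of letting each mechanism act alone.

from typing import Dict, List

# Hypothetical mapping from attack category to ordered, multi-layer responses.
RESPONSES: Dict[str, List[str]] = {
    "congestion_dos": ["rate_limit_upstream", "migrate_service"],
    "code_injection": ["sandbox_process", "patch_and_restart"],
}


def coordinate(alerts: List[str]) -> List[str]:
    """Merge per-alert responses into one plan, keeping order, dropping repeats."""
    plan: List[str] = []
    for alert in alerts:
        for step in RESPONSES.get(alert, []):
            if step not in plan:
                plan.append(step)
    return plan
```

With isolated mechanisms, a congestion-based DoS would only ever trigger the firewall's rate limiter; the coordinated plan can also migrate the service, which is the kind of cross-layer response the abstract argues uncoordinated designs leave unused.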
Persistent Protection in Multicast Content Delivery
Computer networks make it easy to distribute digital media at low cost. Digital rights management (DRM) systems are designed to limit the access that paying subscribers (and non-paying intruders) have to these digital media. However, current DRM systems are tied to unicast delivery mechanisms, which do not scale well to very large groups. In addition, the protection provided by DRM systems is in most cases not persistent, i.e., it does not prevent the legitimate subscriber from re-distributing the digital media after reception.
We have collected the requirements for digital rights management from various sources, and presented them as a set of eleven requirements, associated with five categories. Several examples of commercial DRM systems are briefly explained and the requirements that they meet are presented in tabular format. None of the example systems meet all the requirements that we have listed. The security threats that are faced by DRM systems are briefly discussed. We have discussed approaches for adapting DRM systems to multicast data transmission.
We have explored and evaluated the security protocols of a unicast distribution model, published by Grimen et al., that provides "persistent protection". We have found two security attacks and have provided solutions to overcome them. We have then proposed a more scalable architecture based on the modified model, which we call persistent protection in multicast content delivery. We present and formally validate the protocol for control and data exchange among the interacting parties of our proposal.
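One standard building block of scalable protected multicast - offered here only as generic background, not as the specific protocol validated in this work - is leave-triggered group rekeying: when a subscriber leaves, the key server advances to a fresh epoch key so that departed members cannot decrypt future content. The `Group` class and derivation scheme below are illustrative assumptions.

```python
# Sketch of leave-triggered group rekeying for protected multicast.
# The key server holds the master secret and derives a per-epoch group key;
# members receive only the current epoch's key, never the master secret.

import hashlib


def derive_key(master_secret: bytes, epoch: int) -> bytes:
    """Server-side derivation of the group key for one epoch."""
    return hashlib.sha256(master_secret + epoch.to_bytes(4, "big")).digest()


class Group:
    def __init__(self, master_secret: bytes):
        self.master = master_secret
        self.epoch = 0
        self.members = set()

    def join(self, member: str) -> bytes:
        """Admit a member and hand it the current epoch key."""
        self.members.add(member)
        return derive_key(self.master, self.epoch)

    def leave(self, member: str) -> None:
        """Evict a member and rekey: future traffic uses a fresh epoch key."""
        self.members.discard(member)
        self.epoch += 1
```

This toy rekeys the whole group on every leave; practical schemes use key hierarchies to cut the rekey cost from linear to logarithmic in group size, which is exactly the scalability concern a multicast DRM architecture must address.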
Towards a self-managed framework for orchestration and integration of devices in AAL
Ambrozas, Diana. Review of John B. Thompson (1995), The Media and Modernity: A Social Theory of the Media. In: Communication. Information Médias Théories, vol. 18, no. 1, December 1997, pp. 193-195.
Peer-to-Peer Digital Rights Management Using Blockchain
Content distribution networks deliver content like videos, apps, and music to users through servers deployed in multiple datacenters to increase availability and delivery speed. The motivation of this work is to create a content distribution network that maintains a consumer's rights and access to purchased works indefinitely. If a user purchases content from a traditional content distribution network, they lose access to the content when the service is no longer available. The system uses a peer-to-peer network for content distribution along with a blockchain for digital rights management. This combination may give users indefinite access to purchased works. The system benefits content rights owners because they can sell their content at lower cost by distributing costs among the community of peers.
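The core idea - a purchase record that any peer can verify without trusting a central server - rests on hash chaining. The toy ledger below sketches that mechanism; the block fields and function names are illustrative assumptions, not the system's actual chain format.

```python
# Minimal hash-chained ledger sketch: each block records one license grant
# and stores the hash of its predecessor, so any peer can detect tampering
# with earlier purchase records without trusting a central server.

import hashlib
import json
from typing import Dict, List


def block_hash(block: Dict[str, str]) -> str:
    """Deterministic hash of a block's canonical JSON encoding."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def append_license(chain: List[Dict[str, str]], buyer: str, content_id: str) -> List[Dict[str, str]]:
    """Record a license grant, linking it to the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "buyer": buyer, "content": content_id})
    return chain


def verify(chain: List[Dict[str, str]]) -> bool:
    """Check that every block links to the hash of its predecessor."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True
```

Because each block commits to its predecessor's hash, altering any earlier purchase record invalidates every later link - which is what lets a peer prove a past purchase even after the original seller's servers disappear.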
Feasibility study: Electronic media laboratory according to Industry standard
The goal of this Master's thesis was to determine how well the Metropolia media technology laboratory corresponds to industrial electronic media production standards and environments. The laboratory environment for education and training should follow not only the media technology curriculum but also the requirements set for learning audio-visual technology, including the processes, technologies and practices involved in professional electronic production and publishing in Finland. Since the role of the laboratory is to serve as an effective learning environment, there are understandable technological, financial and administrative restrictions that prevent it from reaching the industrial level. These limitations, and the differences between this environment and the more sophisticated industrial model, were studied in this thesis.
A feasibility study, a method suited to analyzing technological, financial and administrative development together, was selected as the research method. The study aimed to determine the functional validation of an achievable laboratory model and to demonstrate a proof of concept.
An embedded model was created for audiovisual production in the laboratory and the subsequent electronic digital publishing. The model takes into account the ongoing changes in the television environment, where traditional radio frequency transmission now competes with alternatives such as IP network distribution. Video editing as post-production was analyzed in its current state and compared to a complete network-based editing model using NAS and SAN storage, which can also be implemented with cloud technologies for decentralized collaboration. In this way the model followed industry trends and future prospects. Research material was collected from written sources, industry fairs and conferences, resellers and agents, as well as production companies and broadcasters.
Practical implementation of electronic media publishing was simulated in the laboratory environment so that all possible reception technologies were covered. Publishing was customized for compatibility with various devices. Noteworthy aspects in this publishing process were the quality management aspects related to post-processing, packetizing and distribution; their definition and management were also discussed in this thesis.
The end result was a feasibility analysis and model that defines the bottlenecks in teaching electronic publishing technology with respect to the equivalence between the laboratory environment and industrial technologies. Concrete quality management questions that emerged included the suitability of different video compression algorithms for distributing video content, enabling the automation of publishing, and the study and specification of different publishing platforms. The greatest stumbling blocks proved to be the difficulty of operating the distribution system and the challenge of specifying the networked editing system.