
    Internet Governance: the State of Play

    The Global Forum on Internet Governance, held by the UN ICT Task Force in New York on 25-26 March, concluded that Internet governance issues are many and complex. The Secretary-General's Working Group on Internet Governance will have to map out and navigate this complex terrain as it makes recommendations to the World Summit on the Information Society in 2005. To assist in this process, the Forum recommended, in the words of the Deputy Secretary-General of the United Nations at the closing session, that a matrix be developed "of all issues of Internet governance addressed by multilateral institutions, including gaps and concerns, to assist the Secretary-General in moving forward the agenda on these issues." This paper takes up the Deputy Secretary-General's challenge. It analyses the state of play in Internet governance across different forums, with a view to showing: (1) what issues are being addressed, (2) by whom, (3) what types of consideration these issues receive, and (4) what issues are not adequately addressed.

    VoIP under the EU regulatory framework : preventing foreclosure?

    In June 2004, the European Commission (EC) issued an "Information and Consultation Document" (European Commission 2004) that discussed how the Regulatory Framework of the European Union (EU) should be adapted to accommodate Voice over IP (VoIP) and invited relevant parties to comment on the Consultation Document. In our study, we use the responses of the different market parties to identify how incumbents seek to foreclose the market for VoIP telephony. From these responses we conclude that foreclosure is attempted not only by setting high prices for the use of infrastructure, but also by the strategic choice of infrastructure technology, which raises the cost of entry. We label the latter form of foreclosure "technological foreclosure", as opposed to "market foreclosure". A simple modeling exercise shows that regulators seeking to avoid market foreclosure might trigger technological foreclosure. We argue that this has happened with the unbundling of the local loop in the EU, and that it might happen again with the transition to VoIP. We conclude that the current rights and obligations assigned to telecom companies effectively protect incumbents from competition by VoIP entrants. Moreover, the inaction of regulatory authorities when it comes to numbering and communication protocols is advantageous for incumbents and might obstruct the provision of new services in the future.

    IVOA Recommendation: Observation Data Model Core Components and its Implementation in the Table Access Protocol Version 1.0

    This document defines the core components of the Observation data model that are necessary to perform data discovery when querying data centers for observations of interest. It exposes use cases to be carried out, explains the model and provides guidelines for its implementation as a data access service based on the Table Access Protocol (TAP). It aims to provide a simple model that is easy to understand and implement by data providers who wish to publish their data into the Virtual Observatory. This interface integrates data modeling and data access aspects in a single service and is named ObsTAP. It will be referenced as such in the IVOA registries. A separate document will cover the full Observation data model. In this document, the Observation Data Model Core Components (ObsCoreDM) defines the core components of queryable metadata required for global discovery of observational data. It is meant to allow a single query to be posed to TAP services at multiple sites to perform global data discovery without having to understand the details of the services present at each site. It defines a minimal set of basic metadata and thus allows for a reasonable cost of implementation by data providers. The combination of the ObsCoreDM with TAP is referred to as an ObsTAP service. As with most of the VO Data Models, ObsCoreDM makes use of STC, Utypes, Units and UCDs. The ObsCoreDM can be serialized as a VOTable. ObsCoreDM can make reference to more complete data models such as ObsProvDM (the Observation Provenance Data Model, to come), Characterisation DM, Spectrum DM or Simple Spectral Line Data Model (SSLDM).
    About the IVOA: http://www.ivoa.net. Editors: Doug Tody, Alberto Micol, Daniel Durand, Mireille Louy.
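    The single-query discovery described above can be illustrated with a short, hedged sketch: a minimal ADQL query against the ivoa.obscore table using the pyvo client. The service URL is a placeholder; any ObsTAP-compliant endpoint should accept the same query, since the queried columns are fixed by the ObsCoreDM.

```python
# Minimal global-discovery sketch against an ObsTAP service via pyvo.
# The URL is a placeholder, not a real endpoint.
import pyvo

service = pyvo.dal.TAPService("http://example.org/tap")  # placeholder URL

# ObsCore columns (dataproduct_type, s_ra, s_dec, access_url, ...) are
# defined by the ObsCoreDM, so the same ADQL works at any ObsTAP site.
query = """
    SELECT TOP 10 target_name, s_ra, s_dec, access_url
    FROM ivoa.obscore
    WHERE dataproduct_type = 'image'
      AND CONTAINS(POINT('ICRS', s_ra, s_dec),
                   CIRCLE('ICRS', 83.8, -5.4, 0.5)) = 1
"""

for row in service.search(query):
    print(row["target_name"], row["access_url"])
```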

    Provision of adaptive and context-aware service discovery for the Internet of Things

    The IoT concept has revolutionised the vision of the future Internet: the advent of standards such as 6LoWPAN makes it feasible to extend the Internet into previously isolated environments, e.g., WSNs. The abstraction of resources as services has opened these environments to a plethora of potential applications, and the web service paradigm can provide interoperability by offering a standard interface for interacting with these services, enabling the WoT paradigm. However, these networks pose many challenges, in terms of limited resources, that make existing IP-based solutions infeasible to adapt: traditional service discovery and selection solutions demand heavy communication and use bulky formats, which are unsuitable for resource-constrained devices that incorporate sleep cycles to save energy. Even a registry-based approach incurs burdensome traffic in maintaining the availability status of devices. A feasible service discovery and selection solution is instrumental to enabling wide application coverage of these networks in the future. This research project proposes TRENDY, a new compact and adaptive registry-based service discovery protocol (SDP) with context awareness for the IoT, with emphasis on constrained networks, e.g., 6LoWPAN. It uses CoAP-based, light-weight, RESTful web services to provide standard interoperable interfaces, which can be easily translated from HTTP. TRENDY's service selection mechanism collects and intelligently uses context information to select appropriate services for user applications, based on the available context information of users and services. In addition, TRENDY introduces an adaptive timer algorithm to minimise the control overhead of status maintenance, which also reduces energy consumption. Its context-aware grouping technique divides the network at the application layer by creating location-based groups. This grouping of nodes localises the control overhead and provides the basis for service composition and for localised aggregation and processing of data. Different grouping roles enable resource awareness by offering profiles with varied responsibilities, where high-capability devices can implement powerful profiles to share the load of low-capability devices, allowing productive usage of network resources. Furthermore, this research project proposes APPUB, an adaptive caching technique with two benefits: it allows service hosts to share their load with the resource directory, and it decreases the service invocation delay. The performance of TRENDY and its mechanisms is evaluated in an extensive set of experiments with emulated Tmote Sky nodes in the COOJA environment. The analysis of the results validates the performance gain of all techniques. The service selection and APPUB mechanisms considerably improve the service invocation delay, which consequently reduces traffic in the network. The timer technique consistently achieved the lowest control overhead, which decreased the energy consumption of the nodes and prolonged the network lifetime. Moreover, the low traffic in dense networks decreases the service invocation delay and makes the solution more scalable. The grouping mechanism localises the traffic, which increases energy efficiency while improving scalability.
    In summary, the experiments demonstrate the benefit of TRENDY and its techniques in terms of increased energy efficiency and network lifetime, reduced control overhead, better scalability and optimised service invocation time.
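    TRENDY's own protocol is not reproduced here, but the CoAP discovery interface it builds on can be sketched. A minimal example, assuming the aiocoap Python library and a placeholder node address, queries a device's /.well-known/core resource, the standard CoRE entry point for listing the services a node hosts:

```python
# Standard CoRE link-format discovery over CoAP (not TRENDY's protocol):
# fetch the list of resources a constrained node advertises.
import asyncio
from aiocoap import Context, Message, GET

async def discover(node):
    protocol = await Context.create_client_context()
    # /.well-known/core returns the node's resources in CoRE link format
    request = Message(code=GET, uri=f"coap://{node}/.well-known/core")
    response = await protocol.request(request).response
    return response.payload.decode("utf-8")

# Placeholder 6LoWPAN address; replace with a real node's address.
print(asyncio.run(discover("[fd00::212:7401:1:101]")))
```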

    An approach to building a secure and persistent distributed object management system

    The Common Object Request Broker Architecture (CORBA) proposed by the Object Management Group (OMG) is a widely accepted standard that provides a system-level framework for the design and implementation of distributed objects. The core of the Object Management Architecture (OMA) is the Object Request Broker (ORB), which provides transparency of object location, activation, and communication. However, the specification provided by the OMG is not sufficient. For instance, there are no security specifications for handling object requests through the ORBs. The lack of such a security service prevents CORBA from being used to handle sensitive data such as personal and corporate financial information. In view of the above, this thesis identifies, explores, and provides an approach to handling secure objects in a distributed environment, along with a persistent object service, using the CORBA specification. The research specifically involves the design and implementation of a secure distributed object service. This object service requires a persistent service and object storage for storing and retrieving security-specific information. To provide a secure distributed object environment, a secure object service using the specifications provided by the OMG has been designed and implemented. In addition, to preserve the persistence of secure information, an object service has been implemented to provide a persistent data store. The secure object service can provide a framework for handling distributed objects in applications requiring security clearance, such as distributed banking, online stock trading, Internet shopping, and geographic and medical information systems.
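    The thesis's CORBA implementation is not shown here; as a language-neutral sketch of the core idea (hypothetical names throughout, not the thesis's code), the example below intercepts every method call on a wrapped object and consults a persistent store of access rights before dispatching, mirroring the pairing of a secure object service with a persistent data store.

```python
# Sketch: intercept each object request and check persisted security
# information before dispatching to the target object.
import shelve

class BankAccount:
    def __init__(self, balance=100):
        self.balance = balance

    def withdraw(self, amount):
        self.balance -= amount
        return self.balance

class SecureProxy:
    """Intercepts method calls, like an ORB-level security service."""

    def __init__(self, target, principal, acl_path="acl.db"):
        self._target = target
        self._principal = principal   # caller's identity
        self._acl_path = acl_path     # persistent security store

    def __getattr__(self, method):
        def invoke(*args, **kwargs):
            with shelve.open(self._acl_path) as acl:  # persistent lookup
                allowed = self._principal in acl.get(method, set())
            if not allowed:
                raise PermissionError(f"{self._principal} may not call {method}")
            return getattr(self._target, method)(*args, **kwargs)
        return invoke

with shelve.open("acl.db") as acl:
    acl["withdraw"] = {"alice"}       # grant alice the 'withdraw' right

account = SecureProxy(BankAccount(), principal="alice")
print(account.withdraw(30))           # 70; a caller without the right raises
```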

    Formal Modeling of Intrusion Detection Systems

    The cybersecurity ecosystem continuously evolves in the number, diversity, and complexity of attacks. Intrusion detection systems (IDS) generally fall into three types: anomaly-based detection, signature-based detection, and hybrid detection. Anomaly-based detection characterises the usual behavior of the system, typically statistically. It can detect known or unknown attacks, but it also generates a very large number of false positives. Signature-based detection detects known attacks through rules that describe known attacker behavior; it therefore requires good knowledge of that behavior. Hybrid detection relies on several detection methods, including the preceding ones, and has the advantage of being more precise during detection. Tools like Snort and Zeek offer low-level languages for expressing attack recognition rules. Since the number of potential attacks is very large, these rule bases quickly become hard to manage and maintain. Moreover, expressing stateful rules that recognize a sequence of events is particularly arduous. In this thesis, we propose a stateful approach based on algebraic state-transition diagrams (ASTDs) to identify complex attacks.
    ASTDs allow a graphical and modular representation of a specification, which facilitates the maintenance and understanding of rules. We extend the ASTD notation with new features to represent complex attacks. Next, we specify several attacks with the extended notation and run the resulting specifications on event streams using an interpreter to identify attacks. We also evaluate the performance of the interpreter against industrial tools such as Snort and Zeek. Finally, we build a compiler that generates executable code from an ASTD specification, able to efficiently identify sequences of events.
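    ASTD specifications themselves are graphical, but the kind of stateful rule they capture can be sketched in plain code. The example below uses a hypothetical brute-force pattern (not one of the thesis's case studies): per-host state tracks failed logins, and three failures followed by a success raise an alert.

```python
# Hand-coded sketch of a stateful detection rule over an event stream:
# flag three failed logins from one host followed by a success.
from collections import defaultdict

FAIL_THRESHOLD = 3

def detect(events):
    """events: iterable of (host, outcome) pairs, outcome in {'fail', 'ok'}."""
    failures = defaultdict(int)          # per-host state, like an automaton
    for host, outcome in events:
        if outcome == "fail":
            failures[host] += 1
        elif failures[host] >= FAIL_THRESHOLD:
            yield f"possible brute force on {host}"
            failures[host] = 0           # reset after reporting
        else:
            failures[host] = 0           # benign success resets the state

stream = [("10.0.0.5", "fail")] * 3 + [("10.0.0.5", "ok")]
print(list(detect(stream)))  # -> ['possible brute force on 10.0.0.5']
```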

    Automating Industrial Event Stream Analytics: Methods, Models, and Tools

    Industrial event streams are an important cornerstone of Industrial Internet of Things (IIoT) applications. In the manufacturing domain, for instance, such streams are typically produced by distributed industrial assets on the shop floor at high frequency. To add business value and extract the full potential of the data (e.g. through predictive quality assessment or maintenance), industrial event stream analytics is an essential building block. One major challenge is that the required technical and domain knowledge is distributed across several roles, which makes the realization of analytics projects time-consuming and error-prone. For instance, accessing industrial data sources requires a high level of technical skill due to the large heterogeneity of protocols and formats. To reduce the technical overhead of current approaches, several problems must be addressed. The goal is to enable so-called "citizen technologists" to evaluate event streams through a self-service approach, which requires new methods and models that cover the entire data analytics cycle. This thesis answers the research question of how citizen technologists can be enabled to independently perform industrial event stream analytics. The first step is to investigate how the technical complexity of modeling and connecting industrial data sources can be reduced. Subsequently, it is analyzed how event streams can be automatically adapted, directly at the edge, to meet the requirements of data consumers and the infrastructure. Finally, this thesis examines how machine learning models for industrial event streams can be trained in an automated way to evaluate previously integrated data. The main research contributions of this work are:
    1. A semantics-based adapter model to describe industrial data sources and to automatically generate adapter instances on edge nodes.
    2. An extension for publish-subscribe systems that dynamically reduces event streams while considering the requirements of downstream algorithms.
    3. A novel AutoML approach that enables citizen data scientists to train and deploy supervised ML models for industrial event streams.
    The developed approaches are fully implemented in various high-quality software artifacts. These have been integrated into a large open-source project, which enables rapid adoption of the novel concepts in real-world environments. For the evaluation, two user studies investigating usability, as well as performance and accuracy tests of the individual components, were performed.
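    As an illustration of the second contribution's theme, the sketch below (a generic example, not the thesis's implementation) reduces an event stream at the edge: a reading is forwarded to subscribers only when it deviates from the last forwarded value by more than a consumer-declared tolerance, trading precision for bandwidth.

```python
# Edge-side event stream reduction sketch: forward a reading only when it
# differs from the last forwarded value by more than the given tolerance.
def reduce_stream(readings, tolerance):
    """Yield only readings that meaningfully change the downstream view."""
    last = None
    for value in readings:
        if last is None or abs(value - last) > tolerance:
            last = value
            yield value   # publish downstream

readings = [20.0, 20.1, 20.05, 21.5, 21.6, 25.0]
print(list(reduce_stream(readings, tolerance=1.0)))  # [20.0, 21.5, 25.0]
```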

    Web Publications

    The primary objective of this specification is to define requirements for the production of Web Publications. In doing so, it also defines a framework for creating packaged publication formats, such as EPUB and audiobooks, where a pathway to the Web is highly desirable but not necessarily the primary method of interchange or consumption.

    Scientific data and metadata: a study on the use of metadata standards in the scientific information flow on biodiversity

    Master's dissertation, Universidade de Brasília, Faculdade de Estudos Sociais Aplicados, Departamento de Ciência da Informação e Documentação, 2020.
    Making data accessible allows further research, provides information for decision-making and contributes to transparency in science. Ensuring data is properly managed and shared is key to a high-quality research environment in terms of scientific breakthroughs, generating world-leading research and enabling more international research collaborations. However, making data accessible, understandable and truly reusable remains a challenge. Publishing datasets is a time-consuming process that is often seen as a courtesy rather than a necessary step in the research process. Acknowledging the significant effort involved in managing and publishing a dataset remains a flimsy, not well established practice in the scientific community. Over the last decade, many biodiversity informatics initiatives at global, regional and local scales have emerged with a clear goal: to compile and share data, making science open worldwide.
    This study reviews the scientific communication of global biodiversity datasets from the perspective of metadata use throughout the scientific information flow. The proposed methodology establishes a baseline by identifying the internationally recommended metadata standards and comparing them with evidence of actual metadata use. The study is grounded in qualitative, descriptive research guided by the berrypicking information retrieval method; a systematic review and meta-synthetic analysis were used to collect and analyze metadata evidence extracted from scientific papers and from the metadata records of Global Biodiversity Information Facility (GBIF) datasets. By investigating the literature and the metadata records of biodiversity datasets, metadata use was outlined in a comparative analysis against the recommended standards. Best practices already adopted, or in the process of being internalized, in the management, communication and governance of biodiversity research data were also identified.
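    The GBIF dataset metadata records examined in the study can be retrieved programmatically. A minimal sketch, assuming the public GBIF registry search API; the exact response field names ("title", "type") are assumptions to verify against the API documentation:

```python
# Fetch a few dataset metadata records from the GBIF registry search API.
import requests

resp = requests.get(
    "https://api.gbif.org/v1/dataset/search",
    params={"q": "biodiversity", "limit": 5},
    timeout=30,
)
resp.raise_for_status()

# Each result is a dataset-level metadata record of the kind compared
# against recommended standards in the study.
for dataset in resp.json()["results"]:
    print(dataset.get("title"), "-", dataset.get("type"))
```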