Crosslayer Survivability in Overlay-IP-WDM Networks
As the Internet moves towards a three-layer architecture consisting of overlay networks on top of the IP network layer on top of WDM-based physical networks, incorporating the interactions between and among network layers is crucial for efficient and effective implementation of survivability. This dissertation has four major foci. First, a first-of-its-kind analysis of the impact of overlay network dependency on the lower-layer networks reveals that backhaul, a link loop that can occur at any two or more layers below the layer where traffic is present, may arise. This prompts our proposal of a crosslayer survivable mapping that highlights such challenges and offers survivability in an efficient, backhaul-free way. The results demonstrate that the impact of layer dependency is more severe than initially anticipated, making it clear that independent single-layer network design is inadequate to assure service guarantees and efficient capacity allocation. Second, a forbidden link matrix is proposed that masks part of the network, for use in situations where some physical links are reserved exclusively for a designated service, mainly in the context of providing multiple levels of differentiation in network use and service guarantees. The masking effect is evaluated on several metrics using practical approaches in a sample real-world network, showing that both efficiency and practicality can be achieved. Third, matrix-based optimization formulations of several crosslayer survivable mappings are presented; examples on link-availability mapping are illustrated in particular. Fourth, survivability strategies for two-layer backbone networks where traffic originates at each layer are investigated. Optimization-based formulations of recovery mechanisms performed at each layer, for both layers of traffic, are also presented.
Numerical results indicate that, in such a wavelength-based optical network, implementing survivability of all traffic at the bottom layer can be a viable solution with significant advantages. This dissertation concludes by identifying a roadmap of potential future work for crosslayer survivability in layered network settings.
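The idea of a survivable mapping can be illustrated with a small, purely hypothetical two-layer example. The topology, link names, and functions below are illustrative sketches and are not taken from the dissertation: a mapping of logical links onto physical paths is deemed survivable if no single physical-link failure disconnects the logical topology.

```python
# Toy survivable-mapping check (hypothetical example, not the
# dissertation's formulation): each logical link is routed over a
# physical path; the mapping is survivable if the logical topology
# stays connected under every single physical-link failure.

def connected(nodes, links):
    """Check connectivity of an undirected graph via depth-first search."""
    if not nodes:
        return True
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(v for a, b in links for u, v in ((a, b), (b, a)) if u == n)
    return seen == set(nodes)

def survivable(nodes, mapping):
    """mapping: logical link -> list of physical links forming its path."""
    phys = {pl for path in mapping.values() for pl in path}
    for failed in phys:
        alive = [ll for ll, path in mapping.items() if failed not in path]
        if not connected(nodes, alive):
            return False
    return True

# Logical ring A-B-C; the path for (A, C) avoids the physical links used
# by (A, B) and (B, C), so any single physical failure leaves a spanning tree.
mapping = {
    ("A", "B"): [("a", "b")],
    ("B", "C"): [("b", "c")],
    ("A", "C"): [("a", "d"), ("d", "c")],
}
assert survivable({"A", "B", "C"}, mapping)

# If (A, C) instead reuses physical link (a, b), failing (a, b) kills
# two logical links at once and disconnects node A.
bad = dict(mapping)
bad[("A", "C")] = [("a", "b"), ("b", "c")]
assert not survivable({"A", "B", "C"}, bad)
```

The second case shows why independent single-layer design fails: both logical links look disjoint at the overlay, yet share a physical link below.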
The 1990 Goddard Conference on Space Applications of Artificial Intelligence
The papers presented at the 1990 Goddard Conference on Space Applications of Artificial Intelligence are given. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The proceedings fall into the following areas: Planning and Scheduling, Fault Monitoring/Diagnosis, Image Processing and Machine Vision, Robotics/Intelligent Control, Development Methodologies, Information Management, and Knowledge Acquisition
Secure and efficient storage of multimedia content in public cloud environments using joint compression and encryption
Cloud computing is a paradigm with many areas still unexplored, ranging from the technological component to the definition of new business models, but it is revolutionizing the way we design, implement, and manage the entire information technology infrastructure.
Infrastructure as a Service is the delivery of computing infrastructure, typically a virtual data center, along with a set of APIs that allow applications to control, automatically, the resources they wish to use. The choice of service provider, and the way it applies its business model, may lead to higher or lower costs in operating and maintaining applications with that provider.
In this sense, this dissertation set out to review the literature on cloud computing and on the secure storage and transmission of multimedia content using lossless compression in public cloud environments, and to implement such a system by building an application that manages data in public cloud environments (Dropbox and MEO Cloud).
An application that meets the stated objectives was built during this dissertation. The system provides the user with a wide range of data-management functions in public cloud environments. The user only has to log in to the system with his/her credentials; after login, an access token is generated through the OAuth 1.0 authorization protocol. This token is generated only with the user's consent and allows the application to access the user's data/files without having to use the credentials. With this token the application can operate and unlock the full potential of its functions. The application also offers compression and encryption functions so that the user can make the most of his/her cloud storage system securely. The compression function uses the LZMA algorithm; the user only needs to choose the files to be compressed. For encryption, the AES (Advanced Encryption Standard) algorithm is used with a 128-bit symmetric key defined by the user.
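The compression stage described above can be sketched with Python's standard lzma module, which implements the LZMA algorithm. The function names are illustrative (the application's actual code is not shown here), and the AES-128 encryption step that would wrap the compressed bytes is only indicated in a comment:

```python
import lzma

def compress_for_upload(data: bytes) -> bytes:
    # LZMA compression, as in the application's compression function;
    # an AES-128 encryption step (via a crypto library) would follow
    # before the bytes are sent to the public cloud.
    return lzma.compress(data, preset=9)

def decompress_after_download(blob: bytes) -> bytes:
    return lzma.decompress(blob)

sample = b"multimedia content " * 1024
packed = compress_for_upload(sample)
assert decompress_after_download(packed) == sample
assert len(packed) < len(sample)  # highly repetitive data compresses well
```

Because LZMA is lossless, the round trip recovers the original bytes exactly, which is what makes the compress-then-encrypt pipeline safe for arbitrary multimedia files.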
We built the research in two distinct and complementary parts: the first consists of the theoretical foundation, and the second is the development of the computer application in which the data are managed, compressed, stored, and transmitted in various cloud computing environments. The theoretical framework is organized into two chapters: Chapter 2, Background on Cloud Storage, and Chapter 3, Data Compression.
Through the theoretical foundation we sought to demonstrate the relevance of the research, convey some of the pertinent theories, and introduce, whenever possible, existing research in the area. The second part of the work was devoted to the development of the application in a cloud environment.
We showed how we built the application and presented its features, advantages, and safety standards for the data. Finally, we reflected on the results in light of the theoretical framework of the first part and the development of the platform.
We consider the outcome of this work positive and in line with the goals we set out to achieve. This research has some limitations: the time available for its completion was scarce, and the platform could benefit from the implementation of additional features. In future research it would be appropriate to continue the project, expanding the capabilities of the application, testing its operation with other users, and carrying out comparative tests.
Fundação para a Ciência e a Tecnologia (FCT)
Internet of Things: From Hype to Reality
The Internet of Things (IoT) has gained significant mindshare and attention in academia and industry, especially over the past few years. The reasons behind this interest are the potential capabilities that IoT promises to offer. On the personal level, it paints a picture of a future world where all the things in our ambient environment are connected to the Internet and seamlessly communicate with each other to operate intelligently. The ultimate goal is to enable objects around us to efficiently sense our surroundings, inexpensively communicate, and ultimately create a better environment for us: one where everyday objects act based on what we need and like without explicit instructions.
Orbital transfer vehicle concept definition and system analysis study. Volume 2: OTV concept definition and evaluation. Book 1: Mission and system requirements
The mission and system requirements for the concept definition and system analysis of the Orbital Transfer Vehicle (OTV) are established. The requirements set forth constitute the single authority for the selection, evaluation, and optimization of the technical performance and design of the OTV. This requirements document forms the basis for the Ground and Space Based OTV concept definition analyses and establishes the physical, functional, performance and design relationships to STS, Space Station, Orbital Maneuvering Vehicle (OMV), and payloads
Educational Data Analytics for Teachers and School Leaders
Educational Data Analytics (EDA) has been credited with significant benefits for enhancing on-demand personalized educational support of individual learners, as well as reflective course (re)design for achieving more authentic teaching, learning and assessment experiences integrated into real work-oriented tasks. This open access textbook is a tutorial for developing, practicing and self-assessing core competences in educational data analytics for digital teaching and learning. It combines theoretical knowledge on core issues related to collecting, analyzing, interpreting and using educational data, including ethics and privacy concerns. The textbook provides questions and teaching materials/learning activities, added after each section, in the form of quizzes with multiple types of questions related to the topic studied or the video(s) referenced. These activities reproduce real-life contexts through a suitable use-case scenario (storytelling), encouraging learners to link theory with practice, and self-assessed assignments enable learners to apply their attained knowledge and acquired competences on EDL. By studying this book, you will: know where to locate useful educational data in different sources and understand their limitations; know the basics for managing educational data to make them useful, understand relevant methods, and be able to use relevant tools; know the basics for organising, analysing, interpreting and presenting learner-generated data within their learning context, understand relevant learning analytics methods, and be able to use relevant learning analytics tools; know the basics for analysing and interpreting educational data to facilitate educational decision making, including course and curricula design, understand relevant teaching analytics methods, and be able to use relevant teaching analytics tools; and understand issues related to educational data ethics and privacy.
This book is intended for school leaders and teachers engaged in blended (using the flipped classroom model) and online (during the COVID-19 crisis and beyond) teaching and learning; e-learning professionals (such as instructional designers and e-tutors) of online and blended courses; instructional technologists; and researchers, as well as undergraduate and postgraduate university students studying education, educational technology and relevant fields.
A shared-disk parallel cluster file system
Dissertation presented to obtain the degree of Doctor in Informatics at Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Today, clusters are the de facto cost-effective platform both for high performance
computing (HPC) as well as IT environments. HPC and IT are quite different environments
and differences include, among others, their choices of file systems and storage: HPC favours parallel file systems geared towards maximum I/O bandwidth, which however are not fully POSIX-compliant and were devised to run on top of (fault-prone) partitioned storage; conversely, IT data centres favour both external disk arrays (to provide highly available storage) and POSIX-compliant file systems (either general purpose or shared-disk cluster file systems, CFSs).
These specialised file systems do perform very well in their target environments provided that applications do not require certain lateral features: parallel file systems offer no file locking, and CFSs offer no high-performance writes over cluster-wide shared files. In brief, none of the above approaches solves the problem of providing high levels of reliability and performance to both worlds.
Our pCFS proposal makes a contribution to changing this situation: the rationale is to take advantage of the best of both – the reliability of cluster file systems and the high performance of parallel file systems. We don’t claim to provide the absolute best of each, but we aim at full POSIX compliance, a rich feature set, and levels of reliability and performance good enough
for broad usage – e.g., traditional as well as HPC applications, support of clustered DBMS engines that may run over regular files, and video streaming. pCFS’ main ideas include:
· Cooperative caching, a technique that has been used in file systems for distributed disks but, as far as we know, was never used either in SAN based cluster file systems or in parallel file systems. As a result, pCFS may use all infrastructures (LAN and SAN) to move data.
· Fine-grain locking, whereby processes running across distinct nodes may define non-overlapping byte-range regions in a file (instead of the whole file) and access them in parallel, reading and writing over those regions at the infrastructure’s full speed (provided that no major metadata changes are required).
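A minimal user-space analogue of the fine-grain locking idea can be sketched with standard POSIX byte-range locks (pCFS' own mechanism lives in the kernel and is not shown here; the file name and sizes below are arbitrary):

```python
import fcntl
import os
import tempfile

# Two cooperating processes could each lock a disjoint byte range of the
# same file and write in parallel; here a single process locks and writes
# the first 4 KB region as an illustration.
fd, path = tempfile.mkstemp()
os.close(fd)

with open(path, "r+b") as f:
    f.truncate(8192)
    # Exclusive lock on bytes [0, 4096); a peer process could
    # simultaneously lock [4096, 8192) without conflict.
    fcntl.lockf(f, fcntl.LOCK_EX, 4096, 0, os.SEEK_SET)
    f.seek(0)
    f.write(b"A" * 4096)
    f.flush()
    fcntl.lockf(f, fcntl.LOCK_UN, 4096, 0, os.SEEK_SET)

with open(path, "rb") as f:
    assert f.read(4096) == b"A" * 4096

os.remove(path)
```

The point of the sketch is the granularity: the lock covers a byte range, not the whole file, so disjoint writers never serialize on each other.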
A prototype was built on top of GFS (a Red Hat shared disk CFS): GFS’ kernel code was
slightly modified, and two kernel modules and a user-level daemon were added. In the
prototype, fine grain locking is fully implemented and a cluster-wide coherent cache is maintained through data (page fragments) movement over the LAN.
Our benchmarks for non-overlapping writers over a single file shared among processes
running on different nodes show that pCFS’ bandwidth is 2 times greater than NFS’ while
being comparable to that of the Parallel Virtual File System (PVFS), both requiring about 10 times more CPU. pCFS’ bandwidth also surpasses GFS’ (600 times for small record sizes, e.g., 4 KB, decreasing to 2 times for large record sizes, e.g., 4 MB), at about the same CPU usage.
Lusitania, Companhia de Seguros S.A.; IBM Shared University Research (SUR) Program
Atomic Transfer for Distributed Systems
Building applications and information systems increasingly means dealing with concurrency and faults stemming from distribution of system components. Atomic transactions are a well-known method for transferring the responsibility for handling concurrency and faults from developers to the software's execution environment, but they incur considerable execution overhead. This dissertation investigates methods that shift some of the burden of concurrency control into the network layer, to reduce response times and increase throughput. It anticipates future programmable network devices, enabling customized high-performance network protocols.
We propose Atomic Transfer (AT), a distributed algorithm to prevent race conditions due to messages crossing on a path of network switches. Switches check request messages for conflicts with response messages traveling in the opposite direction. Conflicting requests are dropped, sparing the request's receiving host from detecting and handling the conflict. AT is designed to perform well under high data contention, as concurrency-control effort is balanced across a network instead of being handled by the contended endpoint hosts themselves.
We use AT as the basis for a new optimistic transactional cache consistency algorithm, supporting execution of atomic applications caching shared data. We then present a scalable refinement, allowing hierarchical consistent caches with predictable performance despite high data update rates.
We give detailed I/O Automata models of our algorithms along with correctness proofs. We begin with a simplified model, assuming static network paths and no message loss, and then refine it to support dynamic network paths and safe handling of message loss.
We present a trie-based data structure for accelerating conflict-checking on switches, with benchmarks suggesting the feasibility of our approach from a performance standpoint.
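A byte-wise trie for conflict checking might look as follows. This is an illustrative sketch, not the dissertation's actual data structure: keys touched by in-flight responses are inserted, and an arriving request is dropped if any of its keys is already present.

```python
class ConflictTrie:
    """Minimal byte-wise trie for exact-key conflict checks (illustrative)."""

    def __init__(self):
        self.root = {}

    def add(self, key: bytes) -> None:
        # Insert a key touched by an in-flight response message.
        node = self.root
        for b in key:
            node = node.setdefault(b, {})
        node[None] = True  # terminal marker for a complete key

    def conflicts(self, key: bytes) -> bool:
        # An incoming request conflicts if its key was inserted before.
        node = self.root
        for b in key:
            if b not in node:
                return False
            node = node[b]
        return None in node

t = ConflictTrie()
t.add(b"account:42")
assert t.conflicts(b"account:42")
assert not t.conflicts(b"account:43")
assert not t.conflicts(b"account:4")   # prefix of a key is not a conflict
```

Trie lookups are linear in the key length regardless of how many keys are stored, which is what makes per-packet checking on a switch plausible.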
AIRM: a new AI Recruiting Model for the Saudi Arabian labour market
One of the goals of Saudi Vision 2030 is to keep the unemployment rate at the lowest level to empower the economy. Prior research has shown that an increase in unemployment has a negative effect on a country’s Gross Domestic Product. This research aims to utilise cutting-edge technology such as Data Lakes (DL), Machine Learning (ML) and Artificial Intelligence (AI) to assist the Saudi labour market by matching job seekers with vacant positions. Currently, human experts carry out this process; however, it is time-consuming and labour-intensive. Moreover, in the Saudi labour market, this process does not use a cohesive data centre to monitor, integrate, or analyse labour market data, resulting in inefficiencies such as bias and latency. These inefficiencies arise from a lack of technologies and, more importantly, from having an open labour market without a national labour market data centre. This research proposes a new AI Recruiting Model (AIRM) architecture that exploits DLs, ML and AI to rapidly and efficiently match job seekers to vacant positions in the Saudi labour market. A Minimum Viable Product (MVP) is employed to test the proposed AIRM architecture using a simulated labour market dataset corpus for training purposes; the architecture is further evaluated with three collaborating Human Resources (HR) professionals. As this research is data-driven in nature, it requires collaboration from domain experts. The first layer of the AIRM architecture uses balanced iterative reducing and clustering using hierarchies (BIRCH) as the clustering algorithm for the initial screening layer. The mapping layer uses sentence transformers with a robustly optimised BERT pre-training approach (RoBERTa) as the base model, and ranking is carried out using Facebook AI Similarity Search (FAISS).
Finally, the preferences layer takes the user’s preferences as a list and sorts the results using the pre-trained cross-encoder model, considering the weight of the more important words. The new AIRM has yielded favourable outcomes. To account for the subjective character of a selection process handled exclusively by human HR experts, this research considered an AIRM selection acceptable when ratified by at least one HR expert. The research evaluated the AIRM using two metrics: accuracy and time. The AIRM had an overall matching accuracy of 84%, with at least one expert agreeing with the system’s output. Furthermore, it completed the task in 2.4 minutes, whereas human experts took more than six days on average. Overall, the AIRM outperforms humans in task execution, making it useful in pre-selecting a group of applicants and positions. The AIRM is not limited to government services; it can also help any commercial business that uses Big Data.
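The mapping and ranking step can be illustrated with a toy, stdlib-only stand-in. The actual AIRM uses RoBERTa sentence embeddings ranked with FAISS; the tiny hand-made vectors and job names below are invented, and brute-force cosine similarity plays the role of the FAISS index:

```python
import math

# Toy stand-in for the AIRM mapping/ranking layers: a job seeker's
# embedding is compared against vacancy embeddings and vacancies are
# ranked by cosine similarity (FAISS would do this at scale).
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

job_seeker = [0.9, 0.1, 0.4]            # hypothetical embedding vector
vacancies = {
    "data-analyst": [0.8, 0.2, 0.5],    # hypothetical vacancy embeddings
    "nurse":        [0.1, 0.9, 0.2],
}

ranked = sorted(vacancies, key=lambda k: cosine(job_seeker, vacancies[k]),
                reverse=True)
assert ranked[0] == "data-analyst"
```

A real deployment swaps the dictionary for a FAISS index and the hand-made vectors for model-generated embeddings; the ranking logic stays conceptually the same.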