46,078 research outputs found
Deep Space Network information system architecture study
The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-system technologies, such as computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.
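As a small illustration of one feature above (dedicated telemetry processors for each receiver producing level 0 data), the following Python sketch packages raw frames into time-tagged level 0 records; all class and field names are hypothetical and are not drawn from the study.

```python
# Hypothetical sketch: one telemetry processor per receiver produces
# level 0 records (time-tagged frames, no higher-level processing).
from dataclasses import dataclass
from typing import List


@dataclass
class Level0Record:
    spacecraft_id: str
    receiver_id: str
    earth_received_time: float  # seconds past a hypothetical epoch
    frame: bytes                # raw telemetry transfer frame


class TelemetryProcessor:
    """Dedicated to a single receiver, as the feature list proposes."""

    def __init__(self, receiver_id: str):
        self.receiver_id = receiver_id

    def to_level0(self, spacecraft_id: str, raw_frames: List[bytes],
                  t0: float) -> List[Level0Record]:
        # Level 0 production: time-tag and package frames only.
        return [Level0Record(spacecraft_id, self.receiver_id, t0 + i, f)
                for i, f in enumerate(raw_frames)]
```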
Managing Service-Heterogeneity using Osmotic Computing
Provisioning computational resources closer to the user is becoming
increasingly important, with a rise in the number of devices making continuous
service requests and the significant recent uptake of latency-sensitive
applications, such as streaming and real-time data processing. Fog computing
provides a solution to such types of applications by bridging the gap between
the user and public/private cloud infrastructure via the inclusion of a "fog"
layer. Such an approach can reduce the overall processing latency, but
issues of redundancy, cost-effective utilization of the computing
infrastructure, and handling services that differ in their characteristics
remain. This difference in the characteristics of services, arising from
variations in their computational-resource and processing requirements, is
termed service heterogeneity. A potential solution to these issues is
Osmotic Computing -- a recently introduced paradigm that divides services
on the basis of their resource usage, using parameters such as energy,
load, and processing time on a data center versus a network edge resource.
Service provisioning can then be distributed across the layers of the
computational infrastructure (edge devices, in-transit nodes, and a data
center), supported by an Osmotic software layer. In this paper, a
fitness-based Osmosis algorithm is proposed to provide support for osmotic
computing by making more effective use of existing Fog server resources. The
proposed approach is capable of efficiently distributing and allocating
services by following the principle of osmosis. The results are presented using
numerical simulations demonstrating gains in terms of lower allocation time and
a higher probability of services being handled with high resource
utilization.
Comment: 7 pages, 4 figures. International Conference on Communication,
Management and Information Technology (ICCMIT 2017), Warsaw, Poland,
3-5 April 2017, http://www.iccmit.net/ (Best Paper Award)
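To make the osmotic idea concrete, the following minimal sketch allocates a service to the resource with the best (lowest) fitness, computed from energy, load, and processing time; the specific fitness function and weights here are assumptions for illustration, not the paper's.

```python
# Minimal sketch of fitness-based osmotic allocation, assuming a
# hypothetical weighted-sum fitness; the paper's function may differ.
from dataclasses import dataclass


@dataclass
class Resource:
    name: str            # e.g. "edge-1" or "datacenter"
    energy_cost: float   # relative energy per unit of work
    load: float          # current utilisation in [0, 1]
    proc_time: float     # expected processing time for the service


def fitness(r: Resource, w_energy=0.4, w_load=0.3, w_time=0.3) -> float:
    # Lower is better: weighted sum of normalised penalties.
    return w_energy * r.energy_cost + w_load * r.load + w_time * r.proc_time


def osmotic_allocate(service: str, resources: list) -> Resource:
    # Osmosis analogy: the service "diffuses" toward the resource
    # offering the least resistance (lowest fitness).
    best = min(resources, key=fitness)
    print(f"{service} -> {best.name}")
    return best


edge = Resource("edge-1", energy_cost=0.2, load=0.6, proc_time=0.8)
dc = Resource("datacenter", energy_cost=0.7, load=0.3, proc_time=0.2)
osmotic_allocate("stream-analytics", [edge, dc])
```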
Joint Energy Efficient and QoS-aware Path Allocation and VNF Placement for Service Function Chaining
Service Function Chaining (SFC) allows the forwarding of a traffic flow along
a chain of Virtual Network Functions (VNFs, e.g., IDS, firewall, and NAT).
Software Defined Networking (SDN) solutions can be used to support SFC, reducing
the management complexity and the operational costs. One of the most critical
issues for the service and network providers is the reduction of energy
consumption, which should be achieved without impacting the quality of
service. In this paper, we propose a novel resource (re)allocation
architecture that enables energy-aware SFC for SDN-based networks. To this
end, we model the problems of VNF placement, allocation of VNFs to flows, and
flow routing as optimization problems. Thereafter, heuristic algorithms are
proposed for the different optimization problems in order to find near-optimal
solutions in acceptable time. The performance of the proposed algorithms is
numerically evaluated over a real-world topology and various network traffic
patterns. The results confirm that the proposed heuristic algorithms provide
near-optimal solutions while their execution times remain practical for
real-life networks.
Comment: Extended version of submitted paper - v7 - July 201
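As a rough illustration of the heuristic flavour described above, the sketch below greedily places each VNF of a chain on the node that adds the least incremental energy, subject to capacity; the paper's actual heuristics, models, and parameters differ, and all node names and values here are hypothetical.

```python
# Greedy energy-aware VNF placement sketch (hypothetical model):
# a sleeping node pays its idle power once, when first switched on.


def place_chain(chain, nodes):
    """chain: list of (vnf_name, cpu_demand); nodes: dict name ->
    {'capacity', 'used', 'idle_power', 'per_cpu_power'}."""
    placement = {}
    for vnf, demand in chain:
        best, best_delta = None, float("inf")
        for name, n in nodes.items():
            if n["used"] + demand > n["capacity"]:
                continue  # capacity constraint
            delta = n["per_cpu_power"] * demand
            if n["used"] == 0:
                delta += n["idle_power"]  # cost of waking the node
            if delta < best_delta:
                best, best_delta = name, delta
        if best is None:
            raise RuntimeError(f"no node can host {vnf}")
        nodes[best]["used"] += demand
        placement[vnf] = best
    return placement


nodes = {
    "n1": {"capacity": 8, "used": 0, "idle_power": 80, "per_cpu_power": 10},
    "n2": {"capacity": 4, "used": 2, "idle_power": 60, "per_cpu_power": 12},
}
print(place_chain([("firewall", 2), ("ids", 3), ("nat", 1)], nodes))
```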
The Design and Operation of The Keck Observatory Archive
The Infrared Processing and Analysis Center (IPAC) and the W. M. Keck
Observatory (WMKO) operate the Keck Observatory Archive (KOA). At the end of
2013, KOA completed the ingestion of data from all eight active observatory
instruments. KOA will continue to ingest all newly obtained observations, at an
anticipated volume of 4 TB per year. The data are transmitted electronically
from WMKO to IPAC for storage and curation. Access to data is governed by a
data use policy, and approximately two-thirds of the data in the archive are
public.
Comment: 12 pages, 4 figures, 4 tables. Presented at Software and
Cyberinfrastructure for Astronomy III, SPIE Astronomical Telescopes +
Instrumentation 2014, June 2014, Montreal, Canada
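As an illustration of how such a data use policy might be enforced, the following sketch checks whether an observation has become public; the 18-month proprietary period used here is an assumed example, not necessarily KOA's actual policy.

```python
# Hypothetical access check: data become public once an assumed
# proprietary period (about 18 months here) has elapsed.
from datetime import date, timedelta


def is_public(obs_date: date, today: date,
              proprietary_days: int = 548) -> bool:
    return today >= obs_date + timedelta(days=proprietary_days)


print(is_public(date(2013, 1, 10), date(2014, 9, 1)))  # True
```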
Single-Board-Computer Clusters for Cloudlet Computing in Internet of Things
The number of connected sensors and devices is expected to increase to billions in the near
future. However, centralised cloud-computing data centres face various challenges in meeting the
requirements inherent to Internet of Things (IoT) workloads, such as low latency, high throughput
and bandwidth constraints. Edge computing is becoming the standard computing paradigm for
latency-sensitive real-time IoT workloads, since it addresses the aforementioned limitations related
to centralised cloud-computing models. Such a paradigm relies on bringing computation close to
the source of data, which presents serious operational challenges for large-scale cloud-computing
providers. In this work, we present an architecture that combines low-cost Single-Board-Computer
clusters placed close to data sources with centralised cloud-computing data centres. The proposed
cost-efficient model may be employed as an alternative to fog computing to meet real-time IoT
workload requirements while preserving scalability. We include an extensive empirical analysis to
assess the suitability of single-board-computer clusters as cost-effective edge-computing micro data
centres. Additionally, we compare the proposed architecture with traditional cloudlet and cloud
architectures, and evaluate them through extensive simulation. We finally show that acquisition costs
can be drastically reduced while keeping performance levels in
data-intensive IoT use cases.
Funding: Ministerio de Economía y Competitividad TIN2017-82113-C2-1-R;
Ministerio de Economía y Competitividad RTI2018-098062-A-I00; European
Union’s Horizon 2020 No. 754489; Science Foundation Ireland grant 13/RC/209
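To illustrate the kind of acquisition-cost comparison the abstract describes, here is a back-of-envelope sketch; all prices and capacity figures are hypothetical placeholders, not results from the paper.

```python
# Hypothetical comparison: cost of an SBC cluster matching the
# aggregate capacity of one cloudlet server. Figures are placeholders.
import math

SBC_PRICE, SBC_CAPACITY = 50.0, 1.0          # USD, relative compute units
SERVER_PRICE, SERVER_CAPACITY = 2500.0, 20.0


def sbc_cluster_cost(target_capacity: float) -> float:
    # Number of boards needed to reach the target aggregate capacity.
    boards = math.ceil(target_capacity / SBC_CAPACITY)
    return boards * SBC_PRICE


print(f"SBC cluster: ${sbc_cluster_cost(SERVER_CAPACITY):.0f} "
      f"vs cloudlet server: ${SERVER_PRICE:.0f}")
```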