The Case for a Factored Operating System (fos)
The next decade will afford us computer chips with 1,000 to 10,000 cores on a single piece of silicon. Contemporary operating systems have been designed to operate on a single core or a small number of cores and hence are not well suited to manage and provide operating system services at such large scale. Managing 10,000 cores is so fundamentally different from managing two cores that the traditional evolutionary approach of operating system optimization will cease to work. The fundamental design of operating systems and operating system data structures must be rethought. This work begins by documenting the scalability problems of contemporary operating systems. These studies are used to motivate the design of a factored operating system (fos). fos is a new operating system targeting 1000+ core multicore systems where space sharing replaces traditional time sharing to increase scalability. fos is built as a collection of Internet-inspired services. Each operating system service is factored into a fleet of communicating servers which in aggregate implement a system service. These servers are designed much in the way that distributed Internet services are designed, but instead of providing high-level Internet services, they provide traditional kernel services and manage traditional kernel data structures in a factored, spatially distributed manner. The servers are bound to distinct processing cores and thus do not contend with end-user applications for implicit resources such as TLBs and caches. Spatial distribution of the OS services also promotes locality, as many operations only need to communicate with the nearest server for a given service.
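As a rough illustration of the factoring idea, the sketch below models a service fleet whose servers are pinned to distinct cores on an on-chip mesh, with a client contacting the nearest member. All names here (ServiceFleet, nearest_server, the grid coordinates) are hypothetical illustrations, not fos interfaces.

```python
# Hypothetical sketch of fos-style "fleets": an OS service is factored into
# servers pinned to distinct cores, and a client contacts the nearest one.
from dataclasses import dataclass

@dataclass(frozen=True)
class Core:
    x: int  # position on the on-chip mesh
    y: int

def manhattan(a: Core, b: Core) -> int:
    return abs(a.x - b.x) + abs(a.y - b.y)

class ServiceFleet:
    """A system service implemented by servers bound to distinct cores."""
    def __init__(self, name: str, server_cores: list[Core]):
        self.name = name
        self.server_cores = server_cores

    def nearest_server(self, client: Core) -> Core:
        # Spatial distribution lets a client talk to the closest server,
        # which is the locality property the abstract highlights.
        return min(self.server_cores, key=lambda c: manhattan(client, c))

fs_fleet = ServiceFleet("filesystem",
                        [Core(0, 0), Core(7, 0), Core(0, 7), Core(7, 7)])
print(fs_fleet.nearest_server(Core(2, 6)))  # -> Core(x=0, y=7)
```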
PANGAEA information system for glaciological data management
Specific parameters determined on cores from continental ice sheets or glaciers can be used to reconstruct former climate. To use this scientific resource effectively, an information system is needed which guarantees consistent long-term storage of data and provides easy access for the scientific community. An information system to archive any data of paleoclimatic relevance, together with the related metadata, raw data, and evaluated paleoclimatic data, is presented. The system, based on a relational database, provides standardized import and export routines, easy access with uniform retrieval functions, and tools for the visualization of the data. The network is designed as a client/server system providing access through the Internet, either with proprietary client software offering high functionality or with read-only access to published data via the World Wide Web.
Correlated Resource Models of Internet End Hosts
Understanding and modelling resources of Internet end hosts is essential for
the design of desktop software and Internet-distributed applications. In this
paper we develop a correlated resource model of Internet end hosts based on
real trace data taken from the SETI@home project. This data covers a 5-year
period with statistics for 2.7 million hosts. The resource model is based on
statistical analysis of host computational power, memory, and storage as well
as how these resources change over time and the correlations between them. We
find that resources with few discrete values (core count, memory) are well
modeled by exponential laws governing the change of relative resource
quantities over time. Resources with a continuous range of values are well
modeled with either correlated normal distributions (processor speed for
integer operations and floating point operations) or log-normal distributions
(available disk space). We validate and show the utility of the models by
applying them to a resource allocation problem for Internet-distributed
applications, and demonstrate their value over other models. We also make our
trace data and tool for automatically generating realistic Internet end hosts
publicly available.
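To make the model concrete, here is a minimal sketch of sampling synthetic hosts in the spirit of the abstract: correlated normals for integer and floating-point speed, a log-normal for available disk space, and a discrete distribution for core count. The numeric parameters below are placeholders, not the values fitted from the SETI@home traces.

```python
# Sketch of generating synthetic end hosts; all distribution parameters
# are made-up placeholders, not the paper's fitted values.
import numpy as np

rng = np.random.default_rng(0)

def sample_host():
    # Integer and floating-point speeds are modelled as a correlated
    # bivariate normal, as the abstract describes.
    mean = [3.0, 2.5]
    cov = [[0.5, 0.4],
           [0.4, 0.6]]  # positive off-diagonal -> correlated speeds
    int_speed, fp_speed = rng.multivariate_normal(mean, cov)

    # Available disk space follows a log-normal distribution.
    disk_gb = rng.lognormal(mean=4.0, sigma=1.0)

    # Core count takes few discrete values; a simple categorical placeholder.
    cores = rng.choice([1, 2, 4, 8], p=[0.1, 0.4, 0.4, 0.1])

    return {"cores": int(cores), "int_speed": int_speed,
            "fp_speed": fp_speed, "disk_gb": disk_gb}

hosts = [sample_host() for _ in range(1000)]
```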
K-core decomposition of Internet graphs: hierarchies, self-similarity and measurement biases
We consider the k-core decomposition of network models and Internet graphs
at the autonomous system (AS) level. The k-core analysis allows one to
characterize networks beyond the degree distribution and uncover structural
properties and hierarchies due to the specific architecture of the system. We
compare the k-core structure obtained for AS graphs with those of several
network models and discuss the differences and similarities with the real
Internet architecture. The presence of biases and the incompleteness of the
real maps are discussed, and their effect on the k-core analysis is assessed
with numerical experiments simulating biased exploration on a wide range of
network models. We find that the k-core analysis provides an interesting
characterization of the fluctuations and incompleteness of maps, as well as
information helping to discriminate the original underlying structure.
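A small experiment along these lines can be run with the networkx library (an assumption; the paper does not name its tooling): compute core numbers for two classic network models and compare their maximum core depth.

```python
# Compare the k-core decomposition of two classic network models.
# Graph sizes and parameters are arbitrary choices for illustration.
import networkx as nx

er = nx.gnm_random_graph(10_000, 30_000, seed=1)  # Erdos-Renyi
ba = nx.barabasi_albert_graph(10_000, 3, seed=1)  # preferential attachment

for name, g in [("ER", er), ("BA", ba)]:
    core = nx.core_number(g)  # node -> largest k with node in the k-core
    print(name, "max core index:", max(core.values()))
```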
k-core decomposition: a tool for the visualization of large scale networks
We use the k-core decomposition to visualize large-scale complex networks in
two dimensions. This decomposition, based on a recursive pruning of the least
connected vertices, allows one to disentangle the hierarchical structure of
networks by progressively focusing on their central cores. Using this
strategy we develop a general visualization algorithm that can be used to
compare the structural properties of various networks and highlight their
hierarchical structure. The low computational complexity of the algorithm,
O(n+e), where n is the number of vertices and e the number of edges,
makes it suitable for the visualization of very large sparse networks. We apply
the proposed visualization tool to several real and synthetic graphs, showing
its utility in finding specific structural fingerprints of computer-generated
and real-world networks.
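The recursive pruning itself is easy to sketch. The version below uses a lazy-deletion heap, so it runs in O((n+e) log n) rather than the O(n+e) achievable with the bucket-based ordering the abstract's bound refers to; it is an illustration of the technique, not the authors' implementation.

```python
import heapq

def core_numbers(adj):
    """Core number of every vertex via recursive pruning of the least
    connected vertices. adj maps each vertex to the set of its neighbours."""
    deg = {v: len(ns) for v, ns in adj.items()}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    removed = set()
    core = {}
    k = 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != deg[v]:
            continue  # stale heap entry; v was updated or already pruned
        k = max(k, d)  # the peeling level never decreases
        core[v] = k
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))  # lazy re-insertion
    return core

# Triangle {0,1,2} with pendant vertex 3: the triangle is the 2-core.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(core_numbers(adj))  # 3 -> 1; 0, 1, 2 -> 2
```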
k-core organization of complex networks
We analytically describe the architecture of randomly damaged uncorrelated
networks as a set of successively enclosed substructures -- k-cores. The k-core
is the largest subgraph where vertices have at least k interconnections. We
find the structure of k-cores, their sizes, and their birth points -- the
bootstrap percolation thresholds. We show that in networks with a finite mean
number z_2 of the second-nearest neighbors, the emergence of a k-core is a
hybrid phase transition. In contrast, if z_2 diverges, the networks contain an
infinite sequence of k-cores which are ultra-robust against random damage.
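The robustness question can be illustrated numerically, for example with networkx (an assumption; graph size, mean degree, and k below are arbitrary): randomly damage an uncorrelated random graph and track the size of its k-core.

```python
# Randomly damage an uncorrelated random graph and watch the k-core shrink.
import random
import networkx as nx

n, mean_degree, k = 20_000, 10, 5
g = nx.gnp_random_graph(n, mean_degree / n, seed=2)

for damage in [0.0, 0.2, 0.4, 0.6]:
    h = g.copy()
    victims = random.Random(2).sample(list(h.nodes), int(damage * n))
    h.remove_nodes_from(victims)
    core = nx.k_core(h, k)  # largest subgraph with min degree >= k
    print(f"damage={damage:.1f}  {k}-core size={core.number_of_nodes()}")
```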
Internet of Things Cloud: Architecture and Implementation
The Internet of Things (IoT), which enables common objects to be intelligent
and interactive, is considered the next evolution of the Internet. Its
pervasiveness and abilities to collect and analyze data which can be converted
into information have motivated a plethora of IoT applications. For the
successful deployment and management of these applications, cloud computing
techniques are indispensable since they provide high computational capabilities
as well as large storage capacity. This paper aims at providing insights about
the architecture, implementation and performance of the IoT cloud. Several
potential application scenarios of IoT cloud are studied, and an architecture
is discussed regarding the functionality of each component. Moreover, the
implementation details of the IoT cloud are presented along with the services
that it offers. The main contributions of this paper lie in the combination of
the Hypertext Transfer Protocol (HTTP) and Message Queuing Telemetry Transport
(MQTT) servers to offer IoT services in the architecture of the IoT cloud with
various techniques to guarantee high performance. Finally, experimental results
are given in order to demonstrate the service capabilities of the IoT cloud
under certain conditions.Comment: 19pages, 4figures, IEEE Communications Magazin
Recommended from our members
Design and Implementation of a High Performance Network Processor with Dynamic Workload Management
The Internet plays a crucial part in today's world. Be it personal communication, business transactions or social networking, the Internet is used everywhere, and hence the speed of the communication infrastructure plays an important role. As the number of users increased, network usage grew: data rates ramped up from a few Mb/s to Gb/s in less than a decade. The network infrastructure therefore needed a major upgrade to support such high data rates. Technological advancements enabled communication links like optical fibres to support these high bandwidths, but the processing speed at the nodes remained constant. This created a need for specialised packet-processing devices able to match the increasing line rates, which led to the emergence of network processors. Network processors were both programmable and flexible. To support the growing number of Internet applications, the single-core network processor evolved into a multi/many-core network processor with multiple cores on a single chip. This improved packet-processing speeds and hence the performance of a network node. Multi-core network processors catered to the needs of high-bandwidth networks by exploiting the inherent packet-level parallelism in a network, but they still faced intrinsic challenges such as load balancing. To maximise the throughput of these multi-core network processors, it is important to distribute the traffic evenly across all the cores. This thesis describes a multi-core network processor with dynamic workload management. A multi-core network processor which performs multiple applications is designed to act as a test bed for an effective workload management algorithm. The workload management algorithm is designed to distribute the workload evenly across all the available cores and hence maximise the performance of the network processor. Runtime statistics of all the cores are collected and updated at run time to decide which application to run on each core and enable an even distribution of workload among the cores. Hence, when overloading of a core is detected, the applications assigned to the cores are re-assigned. For testing purposes, we built a flexible and reusable platform on the NetFPGA 10G board, which uses an FPGA-based approach to prototyping network devices. The performance of the designed workload management algorithm is tested by measuring the throughput of the system for varying workloads.
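A toy version of such a workload manager is sketched below: it aggregates per-core load statistics and migrates the lightest application off an overloaded core whenever that strictly improves the balance. The threshold, cost model, and all names are hypothetical illustrations, not the thesis's algorithm.

```python
# Hypothetical rebalancing policy: detect an overloaded core and migrate
# the lightest application on it to the least loaded core, but only when
# the move strictly improves the balance (which guarantees termination).
def rebalance(assignment, load, threshold=0.8):
    """assignment: app -> core; load: app -> fractional load it contributes."""
    core_load = {}
    for app, core in assignment.items():
        core_load[core] = core_load.get(core, 0.0) + load[app]

    hottest = max(core_load, key=core_load.get)
    while core_load[hottest] > threshold:
        coolest = min(core_load, key=core_load.get)
        if hottest == coolest:
            break  # nowhere to migrate to
        # Candidate: the lightest app currently on the hottest core.
        app = min((a for a, c in assignment.items() if c == hottest),
                  key=load.get)
        if core_load[coolest] + load[app] >= core_load[hottest]:
            break  # migration would not improve the balance
        assignment[app] = coolest
        core_load[hottest] -= load[app]
        core_load[coolest] += load[app]
        hottest = max(core_load, key=core_load.get)
    return assignment

assign = {"ipv4_fwd": 0, "crypto": 0, "dpi": 1}
loads = {"ipv4_fwd": 0.5, "crypto": 0.6, "dpi": 0.2}
print(rebalance(assign, loads))  # ipv4_fwd migrates off overloaded core 0
```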