Rumba: a Python framework for automating large-scale recursive internet experiments on GENI and FIRE+
It is not easy to design and run Convolutional Neural Networks (CNNs) because: 1) given an architecture, finding the optimal number of filters (i.e., the width) at each layer is tricky; and 2) the computational intensity of CNNs impedes deployment on computationally limited devices. Oracle Pruning is designed to remove unimportant filters from a well-trained CNN; it estimates each filter's importance by ablating it in turn and evaluating the model, which delivers high accuracy but suffers from intolerable time complexity, and it requires the resulting width to be given rather than finding it automatically. To address these problems, we propose Approximated Oracle Filter Pruning (AOFP), which keeps searching for the least important filters in a binary search manner, makes pruning attempts by masking out filters randomly, accumulates the resulting errors, and finetunes the model via a multi-path framework. As AOFP enables simultaneous pruning on multiple layers, we can prune an existing very deep CNN with acceptable time cost, negligible accuracy drop, and no heuristic knowledge, or re-design a model that achieves higher accuracy and faster inference.
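The mask-and-accumulate idea in this abstract can be sketched in plain Python. This is a toy stand-in, not the paper's implementation: the ReLU "layer", the function names, and the single halving step are illustrative assumptions, and AOFP's multi-path finetuning is omitted entirely.

```python
import random

def layer_output(weights, x):
    """A stand-in "layer": each filter is one weight row; ReLU of its dot product."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in weights]

def loss(outputs, targets):
    return sum((o - t) ** 2 for o, t in zip(outputs, targets))

def score_filters(weights, x, targets, trials=800, seed=0):
    """Accumulate the loss increase observed whenever a filter was masked out."""
    rng = random.Random(seed)
    base = loss(layer_output(weights, x), targets)
    scores = [0.0] * len(weights)
    counts = [0] * len(weights)
    for _ in range(trials):
        # Random pruning attempt: mask out a random subset of filters.
        mask = [rng.random() < 0.5 for _ in weights]
        masked = [row if keep else [0.0] * len(row)
                  for row, keep in zip(weights, mask)]
        err = loss(layer_output(masked, x), targets) - base
        # Attribute the resulting error to every filter that was masked.
        for i, keep in enumerate(mask):
            if not keep:
                scores[i] += err
                counts[i] += 1
    return [s / max(c, 1) for s, c in zip(scores, counts)]

def prune_half(weights, x, targets):
    """One binary-search-style step: drop the less important half of the filters."""
    scores = score_filters(weights, x, targets)
    order = sorted(range(len(weights)), key=lambda i: scores[i])
    keep = sorted(order[len(weights) // 2:])
    return [weights[i] for i in keep], keep
```

Filters whose removal barely changes the loss accumulate low scores and are pruned first; repeating `prune_half` narrows the width without a preset target.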
Snow stratigraphic heterogeneity within ground-based passive microwave radiometer footprints: implications for emission modeling
Two-dimensional measurements of snowpack properties (stratigraphic layering, density, grain size and temperature) were used as inputs to the multi-layer Helsinki University of Technology (HUT) microwave emission model at a centimeter-scale horizontal resolution, across a 4.5 m transect of ground-based passive microwave radiometer footprints near Churchill, Manitoba, Canada. Snowpack stratigraphy was complex (between six and eight layers), with only three layers extending continuously throughout the length of the transect. Distributions of one-dimensional simulations, accurately representing complex stratigraphic layering, were evaluated using measured brightness temperatures. Large biases (36 to 68 K) between simulated and measured brightness temperatures were minimized (-0.5 to 0.6 K), within measurement accuracy, through application of grain scaling factors (2.6 to 5.3) at different combinations of frequencies, polarizations and model extinction coefficients. Grain scaling factors compensated for uncertainty in relating optical specific surface area (SSA) to HUT effective grain size inputs and quantified relative differences in the scattering and absorption properties of the various extinction coefficients. The HUT model required accurate representation of ice lenses, particularly at horizontal polarization, and the large grain scaling factors highlighted the need to consider microstructure beyond the size of individual grains. As variability of extinction coefficients was strongly influenced by the proportion of large (hoar) grains in a vertical profile, it is important to consider simulations from distributions of one-dimensional profiles rather than single profiles, especially in sub-Arctic snowpacks where stratigraphic variability can be high. Model sensitivity experiments suggested that the level of error in the field measurements and the new methodological framework used to apply them in a snow emission model were satisfactory. Layer amalgamation showed that a three-layer representation of snowpack stratigraphy reduced the bias of a one-layer representation by about 50%.
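The role of a grain scaling factor can be illustrated with a toy model. Everything here is hypothetical: `simulate_tb` is a one-line stand-in for the multi-layer HUT model (brightness temperature simply falls off with effective grain size), and the grain sizes are invented; only the fitting idea, choosing the scale that minimizes mean simulation bias, reflects the abstract.

```python
def simulate_tb(grain_size_mm, t_snow=260.0):
    """Hypothetical emission model: Tb (K) drops with effective grain size."""
    return t_snow - 30.0 * grain_size_mm

def mean_bias(measured, grain_sizes, scale):
    """Mean (simulated - measured) brightness temperature for a given scale."""
    sims = [simulate_tb(scale * g) for g in grain_sizes]
    return sum(s - m for s, m in zip(sims, measured)) / len(measured)

def fit_scale(measured, grain_sizes, lo=1.0, hi=8.0, steps=70):
    """Grid-search the grain scaling factor with the smallest absolute bias."""
    candidates = (lo + (hi - lo) * k / steps for k in range(steps + 1))
    return min(candidates,
               key=lambda s: abs(mean_bias(measured, grain_sizes, s)))
```

With measured values generated from a known scale, the fit recovers it; on real data the recovered factor absorbs the SSA-to-effective-grain-size uncertainty the abstract describes.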
Computing infrastructure issues in distributed communications systems : a survey of operating system transport system architectures
The performance of distributed applications (such as file transfer, remote login, tele-conferencing, full-motion video, and scientific visualization) is influenced by several factors that interact in complex ways. In particular, application performance is significantly affected both by communication infrastructure factors and computing infrastructure factors. Communication infrastructure factors include channel speed, bit-error rate, and congestion at intermediate switching nodes. Computing infrastructure factors include (among other things) both protocol processing activities (such as connection management, flow control, error detection, and retransmission) and general operating system factors (such as memory latency, CPU speed, interrupt and context switching overhead, process architecture, and message buffering). Due to a several-orders-of-magnitude increase in network channel speed and an increase in application diversity, performance bottlenecks are shifting from the network factors to the transport system factors. This paper defines an abstraction called an "Operating System Transport System Architecture" (OSTSA) that is used to classify the major components and services in the computing infrastructure. End-to-end network protocols such as TCP, TP4, VMTP, XTP, and Delta-t typically run on general-purpose computers, where they utilize various operating system resources such as processors, virtual memory, and network controllers. The OSTSA provides services that integrate these resources to support distributed applications running on local and wide area networks. A taxonomy is presented to evaluate OSTSAs in terms of their support for protocol processing activities. We use this taxonomy to compare and contrast five general-purpose commercial and experimental operating systems: System V UNIX, BSD UNIX, the x-kernel, Choices, and Xinu.
RELEASE: A High-level Paradigm for Reliable Large-scale Server Software
Erlang is a functional language with a much-emulated model for building reliable distributed systems. This paper outlines the RELEASE project, and describes the progress in the first six months. The project aim is to scale Erlang's radical concurrency-oriented programming paradigm to build reliable general-purpose software, such as server-based systems, on massively parallel machines. Currently Erlang has inherently scalable computation and reliability models, but in practice scalability is constrained by aspects of the language and virtual machine. We are working at three levels to address these challenges: evolving the Erlang virtual machine so that it can work effectively on large-scale multicore systems; evolving the language to Scalable Distributed (SD) Erlang; and developing a scalable Erlang infrastructure to integrate multiple, heterogeneous clusters. We are also developing state-of-the-art tools that allow programmers to understand the behaviour of massively parallel SD Erlang programs. We will demonstrate the effectiveness of the RELEASE approach using demonstrators and two large case studies on a Blue Gene.
Contrasting Views of Complexity and Their Implications For Network-Centric Infrastructures
There exists a widely recognized need to better understand and manage complex “systems of systems,” ranging from biology, ecology, and medicine to network-centric technologies. This is motivating the search for universal laws of highly evolved systems and driving demand for new mathematics and methods that are consistent, integrative, and predictive. However, the theoretical frameworks available today are not merely fragmented but sometimes contradictory and incompatible. We argue that complexity arises in highly evolved biological and technological systems primarily to provide mechanisms to create robustness. However, this complexity itself can be a source of new fragility, leading to “robust yet fragile” tradeoffs in system design. We focus on the role of robustness and architecture in networked infrastructures, and we highlight recent advances in the theory of distributed control driven by network technologies. This view of complexity in highly organized technological and biological systems is fundamentally different from the dominant perspective in the mainstream sciences, which downplays function, constraints, and tradeoffs, and tends to minimize the role of organization and design.
E2XLRADR (Energy Efficient Cross Layer Routing Algorithm with Dynamic Retransmission for Wireless Sensor Networks)
The main focus of this article is to achieve prolonged network lifetime with overall energy efficiency in wireless sensor networks through controlled utilization of limited energy. A major share of the energy in a wireless sensor network is consumed during routing from source to destination and during retransmission of data on packet loss. To improve this, a cross-layer algorithm is proposed for the routing and retransmission scheme. Simulation results show that this approach can save overall energy consumption.
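The abstract does not detail the proposed algorithm, so as a generic illustration of energy-aware route selection (not E2XLRADR itself), minimum-total-transmission-energy routing can be expressed as Dijkstra's algorithm over per-hop energy costs; the graph and energy values below are invented.

```python
import heapq

def min_energy_route(graph, src, dst):
    """Return (total energy, path) for the cheapest route from src to dst.
    graph maps node -> {neighbor: transmission energy in, e.g., millijoules}."""
    pq = [(0.0, src, [src])]   # (accumulated energy, node, path so far)
    visited = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, energy in graph.get(node, {}).items():
            if nbr not in visited:
                heapq.heappush(pq, (cost + energy, nbr, path + [nbr]))
    return float("inf"), []    # dst unreachable
```

A real cross-layer scheme would additionally fold in link-layer loss rates (expected retransmissions inflate the per-hop energy) and residual node battery, but the shortest-path core is the same.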
Conformational Mechanics of Polymer Adsorption Transitions at Attractive Substrates
Conformational phases of a semiflexible off-lattice homopolymer model near an attractive substrate are investigated by means of multicanonical computer simulations. In our polymer-substrate model, nonbonded pairs of monomers as well as monomers and the substrate interact via attractive van der Waals forces. To characterize conformational phases of this hybrid system, we analyze thermal fluctuations of energetic and structural quantities, as well as adequate docking parameters. Introducing a solvent parameter related to the strength of the surface attraction, we construct and discuss the solubility-temperature phase diagram. Apart from the main phases of adsorbed and desorbed conformations, we identify several other phase transitions such as the freezing transition between energy-dominated crystalline low-temperature structures and globular entropy-dominated conformations. Comment: 13 pages, 15 figures.
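The flat-histogram idea behind multicanonical sampling can be shown on a deliberately trivial system. This sketch uses the Wang-Landau variant (a common way to build multicanonical weights, not the paper's exact method) on a toy model where the "energy" is the number of up spins among independent spins, so the exact density of states is the binomial coefficient.

```python
import math
import random

def wang_landau(n_spins=8, n_stages=20, steps_per_stage=20000, seed=1):
    """Wang-Landau estimate of ln g(E) for E = number of 'up' spins among
    n_spins independent spins; the exact answer is ln C(n_spins, E)."""
    rng = random.Random(seed)
    spins = [0] * n_spins
    e = 0                           # current energy = number of up spins
    ln_g = [0.0] * (n_spins + 1)    # running estimate of ln g(E)
    ln_f = 1.0                      # modification factor, halved each stage
    for _ in range(n_stages):
        for _ in range(steps_per_stage):
            i = rng.randrange(n_spins)
            e_new = e + (1 - 2 * spins[i])   # flipping spin i moves E by +/-1
            # Accept with prob min(1, g(E)/g(E_new)) to flatten the histogram.
            if math.log(rng.random() + 1e-300) < ln_g[e] - ln_g[e_new]:
                spins[i] ^= 1
                e = e_new
            ln_g[e] += ln_f         # penalize the energy level just visited
        ln_f /= 2.0
    base = ln_g[0]                  # fix the arbitrary offset: ln g(0) = 0
    return [v - base for v in ln_g]
```

Because the random walk is forced to visit rare energies (here, all-up or all-down states) as often as common ones, the same machinery lets the polymer simulations in the abstract cross free-energy barriers between adsorbed, desorbed, and crystalline phases.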
LPDT2: La plissure du texte 2
This paper will discuss the artistic processes involved in the creation of the three-dimensional, virtual art installation La Plissure du Texte 2, the sequel to Roy Ascott's groundbreaking telematically networked artwork La Plissure du Texte, created in 1983 and shown that same year at the Musée de l’Art Moderne de la Ville de Paris. While the underlying concepts of the original work will be underlined, including its capability of regenerating itself as an entirely novel manifestation based upon distributed authorship, textual mobility, emergent semiosis, multiple identity, and participatory poesis, the main focus of the text will be upon the creative strategies and the technological means through which the architecture was brought about in the contemporary creative environment of the metaverse.