A plug-and-play approach to automated data interpretation: the data interpretation module (DIM)
The Contaminant Analysis Automation (CAA) Project's automated analysis laboratory provides a "plug-and-play" reusable infrastructure for many types of environmental assays. As a sample progresses from sample preparation to sample analysis and finally to data interpretation, increasing expertise and judgment are needed at each step. The Data Interpretation Module (DIM) echoes the automation's plug-and-play philosophy as a reusable engine and architecture for handling both the uncertainty and the knowledge required to interpret contaminant sample data. This presentation describes the implementation and performance of the DIM in interpreting polychlorinated biphenyl (PCB) gas chromatograms and shows the DIM architecture's reusability for other applications.
Game Theory of Social Distancing in Response to an Epidemic
Social distancing practices are changes in behavior that prevent disease transmission by reducing contact rates between susceptible individuals and infected individuals who may transmit the disease. Social distancing practices can reduce the severity of an epidemic, but the benefits of social distancing depend on the extent to which it is used by individuals. Individuals are sometimes reluctant to pay the costs inherent in social distancing, and this can limit its effectiveness as a control measure. This paper formulates a differential game to identify how individuals would best use social distancing and related self-protective behaviors during an epidemic. The epidemic is described by a simple, well-mixed ordinary differential equation model. We use the differential game to study the potential value of social distancing as a mitigation measure by calculating the equilibrium behaviors under a variety of cost functions. Numerical methods are used to calculate the total costs of an epidemic under equilibrium behaviors as a function of the time to mass vaccination following epidemic identification. The key parameters in the analysis are the basic reproduction number and the baseline efficiency of social distancing. The results show that social distancing is most beneficial to individuals for basic reproduction numbers around 2. In the absence of vaccination or other intervention measures, optimal social distancing never recovers more than 30% of the cost of infection. We also show how the window of opportunity for vaccine development lengthens as the efficiency of social distancing and detection improve.
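The kind of well-mixed ODE model the abstract refers to can be sketched as a standard SIR system in which a fixed social-distancing effort scales down the transmission rate. This is a minimal illustration of the modeling setup, not the paper's actual equations or parameter values; the parameters below are chosen so that the basic reproduction number is 2, the regime the results identify as most sensitive to distancing.

```python
def simulate_sir(beta=0.5, gamma=0.25, distancing=0.0, days=300, dt=0.1):
    """Integrate a well-mixed SIR model with a constant social-distancing
    effort that scales the transmission rate (forward Euler)."""
    s, i, r = 0.999, 0.001, 0.0          # initial fractions of the population
    eff_beta = beta * (1.0 - distancing)  # distancing reduces the contact rate
    for _ in range(int(days / dt)):
        new_inf = eff_beta * s * i * dt   # S -> I flow
        new_rec = gamma * i * dt          # I -> R flow
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r  # final epidemic size (fraction ever infected)

# R0 = beta / gamma = 2; compare no distancing against a 30% contact reduction
baseline = simulate_sir(distancing=0.0)
reduced = simulate_sir(distancing=0.3)
print(baseline, reduced)
```

In the full differential game, the distancing effort is not a constant but a strategy each individual chooses over time by weighing the cost of distancing against the expected cost of infection; the sketch above only shows the epidemic dynamics that strategy acts on.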
HIERtalker: A default hierarchy of high order neural networks that learns to read English aloud
A new learning algorithm based on a default hierarchy of high order neural networks has been developed that is able to generalize as well as handle exceptions. It learns the "building blocks," or clusters of symbols in a stream, that appear repeatedly and convey certain messages. The default hierarchy prevents a combinatoric explosion of rules. A simulator of such a hierarchy, HIERtalker, has been applied to the conversion of English words to phonemes. Achieved accuracy is 99% for trained words and ranges from 76% to 96% for sets of new words.
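The default-hierarchy idea can be illustrated with a toy rule table for letter-to-phoneme conversion: a general default rule covers the common case, and longer, more specific contexts are stored only where the default fails. Because exceptions are stored sparsely rather than enumerating every context, the rule set stays small. The rules and phoneme symbols below are illustrative, not HIERtalker's actual learned rule base.

```python
# Toy default hierarchy: longer (more specific) contexts override
# shorter (more general) defaults.
RULES = {
    "c": "k",     # default: 'c' -> /k/
    "ce": "s",    # exception: 'c' before 'e' -> /s/
    "ch": "tS",   # exception: 'ch' digraph -> /tS/
}

def phoneme_for(word, pos):
    """Return the phoneme for word[pos], preferring the longest
    (most specific) matching context."""
    for length in (2, 1):  # try specific contexts before the default
        key = word[pos:pos + length]
        if key in RULES:
            return RULES[key]
    return None  # no rule covers this symbol

print(phoneme_for("cat", 0))   # general default fires
print(phoneme_for("cell", 0))  # specific exception overrides it
print(phoneme_for("chat", 0))  # digraph exception
```

In the paper's architecture each level of the hierarchy is a high order neural network rather than a literal lookup table, but the precedence logic is the same: a more specific pattern suppresses the more general default.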
PAWS: Collective interactions and data transfers
Most high performance scientific components or applications are implemented as parallel programs operating on physically or logically distributed data. As we consider the interaction between such components, two major issues arise: (1) the definition of what exactly it means for two parallel components to interact, for example in terms of synchronization, and (2) how those components can most efficiently exchange the distributed data they operate on. Since both are common and important, significant efforts have been expended to implement them efficiently. Many of those efforts were, and still are, undertaken by application developers (see [Cou99] for an example). Several attempts have been made to develop generic frameworks solving this problem; [FKKCSCi, KG97a, BFHM98, GKP971] have all addressed aspects of it. Unfortunately, all of these solutions are limited to the set of applications that fell within the scope of their developers' experience, and therefore none of them has been fully successful in providing a general solution. Several factors influence the difficulty of producing a general solution. First, data redistribution depends on data representation, which is very often application-specific; developing a standardized solution for distributed data transfer therefore depends on developing a standardized data representation. Further, different systems assume different transfer logistics, such as timing of transfers, locking of data, and synchronization assumptions. Finally, the shape of the abstractions in different systems depends on the time and tolerance of different users. The Common Component Architecture (CCA) effort is promising with respect to addressing these challenges, as it has already introduced a standardized system of interactions [AGG+99] and is in the process of defining standardized representations for distributed data. Furthermore, CCA builds on the sum of the experiences of its participants.
In this paper we summarize our most recent contributions to the CCA design process related to the interactions of parallel components, called collective components. We introduce the notion of a collective port, an extension of the CCA ports [AGG+99] that allows collective components to interact as one entity. This functionality is not found in other existing standards of the day, such as [OMG95, Ses97], and represents a significant extension of those standards. The usefulness and efficiency of similar abstractions has been shown in [KG97a, KG97b]. The abstraction described here extends them in that it allows the programmer to define the performance/utility trade-off of his or her choice. We further describe a class of translation components, which translate between the distributed data format used by one parallel implementation and that used by another. A well known example of such components is the MxN component, which translates data distributed across M processors to data distributed across N processors. We describe its implementation in PAWS, and the supporting data structures. We also present a mechanism allowing the framework to invoke this component on the programmer's behalf whenever such translation is necessary, freeing the programmer from treating collective component interactions as a special case. In doing so, we introduce user-defined distributed type casts. Finally, we discuss our initial experiments in building complex translation components out of atomic functionalities. Since PAWS assumes a distributed memory model, our experiments are limited to dense rectilinear data. We describe a PAWS application to illustrate the results of this discussion.
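The MxN translation problem can be made concrete with a minimal sketch: for a 1-D block-distributed array, intersecting each source rank's index range with each destination rank's range yields the message schedule a redistribution component must execute. This is an illustrative simplification, not PAWS's actual data structures or API, and real frameworks must handle multi-dimensional and irregular distributions.

```python
def block_ranges(n_items, n_procs):
    """Global [start, stop) index range owned by each rank under a
    simple 1-D block distribution."""
    base, extra = divmod(n_items, n_procs)
    ranges, start = [], 0
    for rank in range(n_procs):
        stop = start + base + (1 if rank < extra else 0)
        ranges.append((start, stop))
        start = stop
    return ranges

def mxn_schedule(n_items, m, n):
    """Message schedule for redistributing a 1-D block array from M
    source ranks to N destination ranks; each entry is
    (src_rank, dst_rank, global_start, global_stop)."""
    src_ranges = block_ranges(n_items, m)
    dst_ranges = block_ranges(n_items, n)
    msgs = []
    for src, (s0, s1) in enumerate(src_ranges):
        for dst, (d0, d1) in enumerate(dst_ranges):
            lo, hi = max(s0, d0), min(s1, d1)
            if lo < hi:  # overlapping ownership => this slice must move
                msgs.append((src, dst, lo, hi))
    return msgs

# 10 elements moving from 2 source ranks to 3 destination ranks
print(mxn_schedule(10, 2, 3))
```

A framework-invoked "distributed type cast," as described above, would amount to computing such a schedule automatically from the two components' data descriptors and executing the transfers behind the port interface.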
Real-world hydrologic assessment of a fully-distributed hydrological model in a parallel computing environment
A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up, with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models is now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.
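The channel-network decomposition described above can be sketched in miniature: treat each sub-basin as a node in a directed graph whose edges point downstream, assign sub-basins to processors by a greedy load balance, and count the edges that cross processor boundaries (the hydrologic exchanges the parallel model must communicate). The topology, work estimates, and greedy heuristic below are illustrative assumptions, not tRIBS internals.

```python
# Toy channel network: each sub-basin drains to one downstream neighbor
# (None marks the basin outlet). Work estimates stand in for, e.g.,
# the number of TIN cells per sub-basin.
downstream = {"A": "C", "B": "C", "C": "E", "D": "E", "E": None}
work = {"A": 4, "B": 3, "C": 5, "D": 2, "E": 6}

def partition(work, n_procs):
    """Greedy load balancing: place each sub-basin (largest work first)
    on the currently least-loaded processor."""
    loads = [0] * n_procs
    assignment = {}
    for basin in sorted(work, key=work.get, reverse=True):
        p = loads.index(min(loads))
        assignment[basin] = p
        loads[p] += work[basin]
    return assignment, loads

assignment, loads = partition(work, 2)

# Downstream edges that cross a processor boundary require communication
# of hydrologic fluxes at every exchange step.
cross = sum(1 for basin, dst in downstream.items()
            if dst is not None and assignment[basin] != assignment[dst])
print(assignment, loads, cross)
```

The trade-off the study quantifies is visible even at this scale: a more balanced partition evens out the per-processor loads, while the number of cross-boundary edges sets the communication cost, and extending the channel network changes both.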