Heterogeneous concurrent computing with exportable services
Heterogeneous concurrent computing, based on the traditional process-oriented model, is approaching its functionality and performance limits. An alternative paradigm, based on the concept of services, supporting data-driven computation, and built on a lightweight process infrastructure, is proposed to enhance the functional capabilities and the operational efficiency of heterogeneous network-based concurrent computing. TPVM is an experimental prototype system supporting exportable services, thread-based computation, and remote memory operations that is built as an extension of and an enhancement to the PVM concurrent computing system. TPVM offers a significantly different computing paradigm for network-based computing, while maintaining a close resemblance to the conventional PVM model in the interest of compatibility and ease of transition. Preliminary experience has demonstrated that the TPVM framework presents a natural yet powerful concurrent programming interface, while being capable of delivering performance improvements of up to thirty percent.
Security and Privacy Dimensions in Next Generation DDDAS/Infosymbiotic Systems: A Position Paper
The pervasiveness of personal devices will expand the applicability of the Dynamic Data Driven Application Systems (DDDAS) paradigm in innumerable ways. While every single smartphone or wearable device is potentially a sensor with powerful computing and data capabilities, privacy and security in the context of human participants must be addressed to leverage the vast possibilities of dynamic data driven application systems. We propose a security- and privacy-preserving framework for next generation systems that harness the full power of the DDDAS paradigm while (1) ensuring provable privacy guarantees for sensitive data; (2) enabling field-level, intermediate, and central hierarchical feedback-driven analysis for both data volume mitigation and security; and (3) intrinsically addressing uncertainty caused either by measurement error or security-driven data perturbation. These thrusts will form the foundation for secure and private deployments of large scale hybrid participant-sensor DDDAS systems of the future.
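As an illustration of the "security-driven data perturbation" mentioned above, here is a minimal sketch of the Laplace mechanism, a standard differential-privacy primitive. The abstract does not specify the framework's actual mechanism, so this is an assumption chosen purely to make the idea concrete:

```python
import math
import random

def laplace_mechanism(value, sensitivity, epsilon):
    """Return value perturbed with Laplace(0, sensitivity/epsilon) noise.

    A query with sensitivity `sensitivity` released this way satisfies
    epsilon-differential privacy. Noise is sampled via the inverse CDF
    of the Laplace distribution.
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5          # uniform in [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return value + noise
```

Smaller `epsilon` gives stronger privacy but larger perturbation; the uncertainty this injects is exactly the kind the proposed framework would need to propagate through its feedback-driven analysis.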
Note on the occurrence of Artemia in Sri Lanka
A brief account is given of the Artemia populations occurring in Sri Lanka with respect to inland and brackish-water aquaculture activities.
SegWay: A Simple Framework for Unsupervised Sleep Segmentation in Experimental EEG Recordings
Sleep analysis in animal models typically involves recording an electroencephalogram (EEG) and electromyogram (EMG) and scoring the vigilance state in brief epochs of data as Wake, REM (rapid eye movement sleep), or NREM (non-REM), either manually or using a computer algorithm. Computerized methods usually estimate features from each epoch, such as the spectral power associated with distinctive cortical rhythms, and dissect the feature space into regions associated with different states by applying thresholds or by using supervised/unsupervised statistical classifiers. There are, however, several factors to consider when using such methods: most classifiers require scored sample data, elaborate heuristics, or computational steps not easily reproduced by the average sleep researcher, who is the targeted end user. Even when prediction is reasonably accurate, small errors can lead to large discrepancies in estimates of important sleep metrics, such as the number of bouts or their duration. As we show here, besides partitioning the feature space by vigilance state, modeling transitions between the states can give more accurate scores and metrics.
An unsupervised sleep segmentation framework, "SegWay", is demonstrated by applying the algorithm step-by-step to unlabeled EEG recordings in mice. The accuracy of sleep scoring and estimation of sleep metrics is validated against manual scores.
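The transition-modeling idea in this abstract can be sketched with a Viterbi decode over a "sticky" three-state transition matrix. The states, the hand-set probabilities, and the emission inputs below are illustrative assumptions, not SegWay's actual parameters:

```python
import math

STATES = ["Wake", "NREM", "REM"]

# Sticky transition matrix: vigilance states tend to persist across
# consecutive epochs. Values are illustrative, not fitted to data.
TRANS = {
    "Wake": {"Wake": 0.97, "NREM": 0.02, "REM": 0.01},
    "NREM": {"Wake": 0.02, "NREM": 0.96, "REM": 0.02},
    "REM":  {"Wake": 0.05, "NREM": 0.05, "REM": 0.90},
}

def viterbi(emission_logp):
    """Most likely state sequence given per-epoch emission log-probabilities.

    emission_logp: list of dicts mapping state -> log P(features | state),
    one dict per epoch (e.g. derived from EEG spectral-power features).
    """
    best = {s: emission_logp[0][s] for s in STATES}  # flat prior over states
    back = []
    for e in emission_logp[1:]:
        prev, new, ptr = best, {}, {}
        for s in STATES:
            p, arg = max((prev[r] + math.log(TRANS[r][s]), r) for r in STATES)
            new[s] = p + e[s]
            ptr[s] = arg
        best = new
        back.append(ptr)
    state = max(best, key=best.get)
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]
```

The point of the transition model is visible on a noisy sequence: a single mis-scored epoch surrounded by Wake epochs gets smoothed back to Wake, which directly addresses the bout-count and bout-duration discrepancies the abstract describes.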
The Growth and Limits of Arbitrage: Evidence from Short Interest
We develop a novel methodology to infer the amount of capital allocated to quantitative equity arbitrage strategies. Using this methodology, which exploits time-variation in the cross section of short interest, we document that the amount of capital devoted to value and momentum strategies has grown significantly since the late 1980s. We provide evidence that this increase in capital has resulted in lower strategy returns. However, consistent with theories of limited arbitrage, we show that strategy-level capital flows are influenced by past strategy returns as well as strategy return volatility, and that arbitrage capital is most limited during times when strategies perform best. This suggests that the growth of arbitrage capital may not completely eliminate returns to these strategies.
Publishing H2O pluglets in UDDI registries
Interoperability and standards, such as Grid Services, are a focus of current Grid research. The intent is to facilitate resource virtualization and to accommodate the intrinsic heterogeneity of resources in distributed environments. It is important that new and emerging metacomputing frameworks conform to these standards in order to ensure interoperability with other Grid solutions. In particular, the H2O metacomputing system offers several benefits, including lightweight operation, user-configurability, and selectable security levels. Its applicability would be enhanced even further through support for Grid services and OGSA compliance. Code deployed into the H2O execution containers is referred to as pluglets. These pluglets constitute the end points of services in H2O, services that are to be made known through publication in a registry. In this contribution, we discuss a system pluglet, referred to as OGSAPluglet, that scans H2O execution containers for available services and publishes them into one or more UDDI registries. We also discuss in detail the algorithms that manage the publication of the appropriate WSDL and GSDL documents for the registration process.
Eliciting the End-to-End Behavior of SOA Applications in Clouds
Availability and performance are key issues in SOA cloud applications. Those applications can be represented as a graph spanning multiple Cloud and on-premises environments, forming a very complex computing system that supports increasing numbers and types of users, business transactions, and usage scenarios. In order to rapidly find, predict, and proactively prevent root causes of issues, such as performance degradations and runtime errors, we developed a monitoring solution which is able to elicit the end-to-end behavior of those applications. We insert lightweight components into SOA frameworks and clients, thereby keeping the monitoring impact minimal. Monitoring data collected from call chains is used to assist in resolving issues related to performance, errors, and alerts, as well as business and IT transactions.
Are There Too Many Safe Securities? Securitization and the Incentives for Information Production
We present a model that helps explain several past collapses of securitization markets. Originators issue too many informationally insensitive securities in good times, blunting investor incentives to become informed. The resulting endogenous scarcity of informed investors exacerbates primary market collapses in bad times. Inefficiency arises because informed investors are a public good from the perspective of originators. All originators benefit from the presence of additional informed investors in bad times, but each originator minimizes his reliance on costly informed capital in good times by issuing safe securities. Our model suggests regulations that limit the issuance of safe securities in good times.
Integrating Job Parallelism in Real-Time Scheduling Theory
We investigate the global scheduling of sporadic, implicit-deadline, real-time task systems on multiprocessor platforms. We provide a task model which integrates job parallelism. We prove that the time-complexity of the feasibility problem for these systems is linear in the number of (sporadic) tasks for a fixed number of processors. We propose a theoretically optimal scheduling algorithm (i.e., with preemptions and migrations neglected). Moreover, we provide an exact feasibility utilization bound. Lastly, we propose a technique to limit the number of migrations and preemptions.
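A linear-time feasibility test of the kind mentioned above can be sketched for the classic case without job parallelism: a sporadic, implicit-deadline task set is feasible on m identical processors under an optimal (fluid) global scheduler iff total utilization is at most m and no single task's utilization exceeds 1. The paper's actual bound, which accounts for job parallelism, differs; this simplified condition is given only to show the linear-time structure:

```python
def feasible(tasks, m):
    """Linear-time feasibility test for sporadic implicit-deadline tasks
    on m identical processors, assuming an optimal global scheduler and
    no intra-job parallelism.

    tasks: list of (wcet, period) pairs; utilization u_i = wcet / period.
    Feasible iff sum(u_i) <= m and every u_i <= 1.
    """
    total = 0.0
    for wcet, period in tasks:
        u = wcet / period
        if u > 1.0:            # a sequential job cannot use >1 processor
            return False
        total += u
    return total <= m          # demand must fit the platform's capacity
```

With job parallelism, a task could legitimately have u_i > 1 (its jobs may occupy several processors at once), which is precisely why the paper needs a generalized model and bound.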
Enhancing functionality and performance in the PVM network computing system
The research funded by this grant is part of an ongoing research project in heterogeneous distributed computing with the PVM system, at Emory as well as at Oak Ridge Labs and the University of Tennessee. This grant primarily supports research at Emory that continues to evolve new concepts and systems in distributed computing, but it also includes the PI's ongoing interaction with the other groups in terms of collaborative research as well as software systems development and maintenance. We have continued our second-year efforts (July 1995 - June 1996) on the same topics as during the first year, namely (a) visualization of PVM programs to complement XPVM displays; (b) I/O and generalized distributed computing in PVM; and (c) evolution of a multithreaded concurrent computing model. 12 refs.