
    The Airlift Capabilities Estimation Prototype: A Case Study in Model Validation

    This study investigates the application of a life cycle approach to the validation of operational models. The classic waterfall life cycle from software engineering is adapted for use on mathematical models by defining four stages of model development. Each stage is discussed in detail and examples of the output from each stage are presented. In addition, techniques are investigated for applying the proposed life cycle to existing models through the recovery of life cycle stages. The methodology is applied to a linear programming model developed for planning airlift operations to demonstrate the power of the life cycle approach to validation. The results of applying each stage of the life cycle to the model are presented. As a final test, the model is used to predict the airlift capability and resource requirements for the Operation Desert Shield airlift. A comparison is made between the predictions of the model and data from the actual operation. The validated model is shown to be a better representation of the airlift planning problem. Finally, specific recommendations are made for operational use of the airlift planning model and for areas where further research is needed on both the model and the life cycle validation approach.

    Using reliable multicast for caching and collaboration within the world wide web

    Journal Article
    The World Wide Web has become an important medium for information dissemination. One model for synchronized information dissemination within the Web is webcasting, in which data are simultaneously distributed to multiple destinations. The Web's traditional unicast client/server communication model suffers, however, when applied to webcasting; approaches that require many clients to simultaneously fetch data from the origin server using the client/server model will likely cause server and link overload. In this paper we describe a webcast design that improves upon previous designs by leveraging the application level framing (ALF) design methodology. We build upon the Scalable Reliable Multicast (SRM) framework, which is based upon ALF, to create a custom protocol that meets webcast's scalability needs. We employ the protocol in an architecture consisting of two reusable components: a webcache component and a browser control component. We have implemented our design using a new SRM library called libsrm. We present the results of a simple performance evaluation and report on lessons learned while using libsrm.
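    A key scalability mechanism in the SRM framework is randomized request suppression: receivers that miss a packet schedule repair requests with random backoff timers, and a receiver that hears another's request first stays silent. The following is a minimal simulation sketch of that idea only; the function name, timer window, and parameters are illustrative assumptions, not libsrm's actual API.

```python
import random

def srm_repair_requests(receivers, max_backoff=1.0, window=0.05, seed=0):
    """Simulate SRM-style request suppression for one lost packet.

    Each receiver draws a random backoff timer; receivers whose timers
    fire within `window` of the earliest timer send a repair request
    before they can hear the first one, and everyone else suppresses.
    Returns the receivers that actually send.  All constants are
    illustrative, not taken from the SRM papers or libsrm.
    """
    rng = random.Random(seed)
    timers = sorted((rng.uniform(0, max_backoff), r) for r in receivers)
    earliest = timers[0][0]
    # Only requests "in flight" before the first one is heard get sent.
    return [r for t, r in timers if t - earliest < window]
```

    Because only timers landing inside the propagation window trigger requests, the number of duplicate requests stays roughly constant as the receiver population grows, which is the property the webcast design relies on.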

    A case study in open source innovation: developing the Tidepool Platform for interoperability in type 1 diabetes management.

    OBJECTIVE: Develop a device-agnostic cloud platform to host diabetes device data and catalyze an ecosystem of software innovation for type 1 diabetes (T1D) management. MATERIALS AND METHODS: An interdisciplinary team decided to establish a nonprofit company, Tidepool, and build open-source software. RESULTS: Through a user-centered design process, the authors created a software platform, the Tidepool Platform, to upload and host T1D device data in an integrated, device-agnostic fashion, as well as an application ("app"), Blip, to visualize the data. Tidepool's software utilizes the principles of modular components, modern web design including REST APIs and JavaScript, cloud computing, agile development methodology, and robust privacy and security. DISCUSSION: By consolidating the currently scattered and siloed T1D device data ecosystem into one open platform, Tidepool can improve access to the data and enable new possibilities and efficiencies in T1D clinical care and research. The Tidepool Platform decouples diabetes apps from diabetes devices, allowing software developers to build innovative apps without requiring them to design a unique back-end (e.g., database and security) or unique ways of ingesting device data. It allows people with T1D to choose to use any preferred app regardless of which device(s) they use. CONCLUSION: The authors believe that the Tidepool Platform can solve two current problems in the T1D device landscape: 1) limited access to T1D device data and 2) poor interoperability of data from different devices. If proven effective, Tidepool's open source, cloud model for health data interoperability is applicable to other healthcare use cases.
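    The platform's central idea, ingesting heterogeneous device records into one common schema so apps need no per-device parsing, can be sketched as a normalization step. The field names, vendor labels, and schema below are illustrative assumptions, not Tidepool's actual data model.

```python
def normalize_reading(raw, device_type):
    """Map a device-specific glucose record onto a common schema.

    Device types and field names are hypothetical examples; the point is
    that apps downstream see one format regardless of the source device.
    """
    if device_type == "cgm_vendor_a":
        # Vendor A already reports mg/dL with an ISO timestamp.
        return {"type": "cbg", "value_mgdl": raw["glucose"],
                "time": raw["ts"]}
    if device_type == "meter_vendor_b":
        # Vendor B reports mmol/L; convert (1 mmol/L ~ 18.016 mg/dL).
        return {"type": "smbg", "value_mgdl": round(raw["mmol"] * 18.016, 1),
                "time": raw["timestamp"]}
    raise ValueError(f"unsupported device: {device_type}")
```

    An app built against the normalized schema works unchanged when the user switches devices, which is the decoupling the abstract describes.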

    A National Health Insurance Program for the United States

    The US will spend $1.79 trillion on health care in 2004, yet 44 million Americans remain uninsured. What the country needs, argues McCanne, is publicly funded universal health coverage.

    Delivering Live Multimedia Streams to Mobile Hosts in a Wireless Internet with Multiple Content Aggregators

    We consider the distribution of channels of live multimedia content (e.g., radio or TV broadcasts) via multiple content aggregators. In our work, an aggregator receives channels from content sources and redistributes them to a potentially large number of mobile hosts. Each aggregator can offer a channel in various configurations to cater for different wireless links, mobile hosts, and user preferences. As a result, a mobile host can generally choose from different configurations of the same channel offered by multiple alternative aggregators, which may be available through different interfaces (e.g., in a hotspot). Once a mobile host is receiving a channel, it may need to hand off to another aggregator to prevent service disruption, for instance when it leaves the subnets that make up its current aggregator's service area (e.g., a hotspot or a cellular network).

    In this paper, we present the design of a system that enables (multi-homed) mobile hosts to seamlessly hand off from one aggregator to another so that they can continue to receive a channel wherever they go. We concentrate on handoffs between aggregators as a result of a mobile host crossing a subnet boundary. As part of the system, we discuss a lightweight application-level protocol that enables mobile hosts to select the aggregator that provides the "best" configuration of a channel. The protocol comes into play when a mobile host begins to receive a channel and when it crosses a subnet boundary while receiving the channel. We show how our protocol can be implemented using the standard IETF session control and description protocols SIP and SDP. The implementation combines SIP and SDP's offer-answer model in a novel way.
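    The selection step, ranking alternative aggregator configurations of the same channel, can be sketched as a scoring function. The scoring criteria and offer fields below are assumptions for illustration; the paper's actual protocol negotiates configurations via SIP/SDP rather than a Python dict.

```python
def pick_aggregator(offers, link_kbps):
    """Choose the 'best' channel configuration a mobile host can receive.

    `offers` maps aggregator name -> {"kbps": bitrate, "pref": user
    preference}; both fields are illustrative.  Feasible offers must fit
    the current wireless link; among those, prefer higher bitrate, then
    higher user preference.  Returns None if nothing fits.
    """
    feasible = {a: o for a, o in offers.items() if o["kbps"] <= link_kbps}
    if not feasible:
        return None
    return max(feasible, key=lambda a: (feasible[a]["kbps"],
                                        feasible[a]["pref"]))
```

    Re-running the same selection after a subnet crossing is what drives the handoff: if a different aggregator now scores best, the host switches to it.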

    Low-complexity video coding for receiver-driven layered multicast

    In recent years, the “Internet Multicast Backbone,” or MBone, has risen from a small research curiosity to a large-scale and widely used communications infrastructure. A driving force behind this growth was the development of multipoint audio, video, and shared whiteboard conferencing applications. Because these real-time media are transmitted at a uniform rate to all of the receivers in the network, a source must either run at the bottleneck rate or overload portions of its multicast distribution tree. We overcome this limitation by moving the burden of rate adaptation from the source to the receivers with a scheme we call receiver-driven layered multicast, or RLM. In RLM, a source distributes a hierarchical signal by striping the different layers across multiple multicast groups, and receivers adjust their reception rate by simply joining and leaving multicast groups. In this paper, we describe a layered video compression algorithm which, when combined with RLM, provides a comprehensive solution for scalable multicast video transmission in heterogeneous networks. In addition to a layered representation, our coder has low complexity (admitting an efficient software implementation) and high loss resilience (admitting robust operation in loosely controlled environments like the Internet). Even with these constraints, our hybrid DCT/wavelet-based coder exhibits good compression performance. It outperforms all publicly available Internet video codecs while maintaining comparable run-time performance. We have implemented our coder in a “real” application, the UCB/LBL videoconferencing tool vic. Unlike previous work on layered video compression and transmission, we have built a fully operational system that is currently being deployed on a very large scale over the MBone.
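    The receiver-side control loop at the heart of RLM, drop an enhancement layer when congestion is observed and add one when there is headroom, can be sketched as a single adaptation step. The loss threshold and the zero-loss probe condition are simplified assumptions, not the paper's full join-experiment machinery.

```python
def rlm_step(subscribed_layers, loss_rate, max_layers, drop_threshold=0.10):
    """One step of a simplified receiver-driven layered multicast loop.

    Each enhancement layer travels on its own multicast group, so the
    receiver tunes its rate purely by joining (adding) or leaving
    (dropping) groups.  The thresholds are illustrative.
    """
    if loss_rate > drop_threshold and subscribed_layers > 1:
        return subscribed_layers - 1   # congestion: leave the top group
    if loss_rate == 0 and subscribed_layers < max_layers:
        return subscribed_layers + 1   # headroom: join the next group
    return subscribed_layers           # hold steady
```

    Because each receiver runs this loop independently, a receiver behind a slow link settles on few layers while one on a fast link subscribes to all of them, with no coordination at the source.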

    Soft ARQ for layered streaming media

    A growing and important class of traffic in the Internet is so-called `streaming media,' in which a server transmits a packetized multimedia signal to a receiver that buffers the packets for playback. This playback buffer, if adequately sized, counteracts the adverse impact of delay jitter and reordering suffered by packets as they traverse the network, and if large enough also allows lost packets to be retransmitted before their playback deadline expires. We call this framework for retransmitting lost streaming-media packets `soft ARQ' since it represents a relaxed form of Automatic Repeat reQuest (ARQ). While state-of-the-art media servers employ such strategies, no work to date has proposed an optimal strategy for delay-constrained retransmission of streaming media: specifically, one that determines the optimal packet to transmit at any given point in time. In this paper, we address this issue and present a framework for streaming media retransmission based on layered media representations, in which a signal is decomposed into a discrete number of layers and each successive layer provides enhanced quality. In our approach, the source chooses between transmitting (1) newer but critical coarse information (e.g., a first approximation of the media signal) and (2) older but less important refinement information (e.g., added details) using a decision process that minimizes the expected signal distortion at the receiver. To arrive at the proper mix of these two extreme strategies, we derive an optimal strategy for transmitting layered data over a binary erasure channel with instantaneous feedback. To provide a quantitative performance comparison of different transmission policies, we conduct a Markov-chain analysis, which shows that the best transmission policy is time-invariant and thus does not change as the frames' layers approach their expiration times.
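    The sender's choice between newer coarse data and older refinement data comes down to picking the packet with the largest expected reduction in receiver distortion among those that can still meet their playback deadlines. The following is a toy sketch of that decision rule; the distortion-gain values, field names, and single-transmission success model are illustrative assumptions, not the paper's derived optimal policy.

```python
def choose_packet(candidates, loss_prob, now):
    """Pick the packet with the largest expected distortion reduction.

    `candidates` is a list of dicts: "gain" is the distortion reduction
    the packet delivers if it arrives, "deadline" its playback deadline.
    A packet past its deadline is worthless.  One transmission succeeds
    with probability (1 - loss_prob).  Fields are illustrative.
    """
    viable = [p for p in candidates if p["deadline"] > now]
    if not viable:
        return None
    return max(viable, key=lambda p: (1 - loss_prob) * p["gain"])
```

    In this simplified model the success probability scales every candidate equally, so the rule reduces to "send the largest still-viable gain"; the paper's Markov-chain analysis handles the richer case where retransmission opportunities and deadlines interact.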

    Victim, perpetrator, family, and incident characteristics of infant and child homicide in the United States Air Force

    Objective: The present study describes factors related to fatal abuse in three age groups in the United States Air Force (USAF). Method: Records from 32 substantiated cases of fatal child abuse in the USAF were independently reviewed for 60 predefined factors. Results: Males were over-represented among young child victims (between 1 year and 4 years of age) and child victims (between 4 years and 15 years of age) but not among infant victims (between 24 hours and 1 year of age). African-American infant victims and perpetrators were over-represented. Younger victims were more likely to have been previously physically abused by the perpetrator. Perpetrators were predominantly male and the biological fathers of the victims. Perpetrators in infant and young child cases reported childhood abuse histories, while perpetrators in child cases reported the highest frequency of mental health contact. Victims’ families reported significant life stressors. Families of young child victims were more likely to be divorced, separated, or single. Incidents with infants and young children tended to occur without witnesses; incidents with child victims tended to have the victim’s sibling(s) and/or mother present. Fatal incidents were more frequent on the weekend, in the home, and initiated by some family disturbance. Conclusions: Differences in factors related to infant and child homicide across age groups may assist in the development of more tailored abuse prevention efforts and may also guide future investigations.
