
    Ionospheric effects on one-way timing signals

    A proposed navigation concept requires that a user measure the time delay that satellite-emitted signals experience in traversing the distance between satellite and user. Simultaneous measurement of the propagation time from four different satellites permits the user to determine his position and clock bias if the satellite ephemerides and signal propagation velocity are known. A pulse propagating through the ionosphere is slowed somewhat, giving an apparent range that is larger than the equivalent free-space range. The difference between the apparent range and the true range, or between the free-space velocity and the true velocity, is the quantity of interest; it is directly proportional to the total electron content along the path of the propagating signal. Thus, if the total electron content is known or measured, an exact ranging correction could be applied. Faraday polarization measurements are taken continuously at Fort Monmouth, N.J., using beacon emissions of the ATS-3 satellite (137.35 MHz). Day-to-day variability of the diurnal variation in total electron content is present, with differences of 50% or more not uncommon. In addition, superposed on the overall diurnal variation are smaller-scale variations of approximately 5 to 10% of the total content, which are attributed to ionospheric density irregularities.
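
    The abstract's key claim, that the ranging error is directly proportional to total electron content, corresponds to the standard first-order ionospheric group-delay approximation. The short Python sketch below is purely illustrative and not taken from the paper; the 50 TECU electron content and the second comparison frequency are assumptions.

        # Illustrative sketch (not from the paper): first-order ionospheric group
        # delay, delta_R ~= 40.3 * TEC / f**2, with TEC in electrons/m^2 and f in Hz.
        C = 299_792_458.0  # free-space speed of light, m/s

        def ionospheric_range_error(tec_el_per_m2, freq_hz):
            """Excess (apparent minus true) one-way range in metres."""
            return 40.3 * tec_el_per_m2 / freq_hz ** 2

        def timing_error_ns(tec_el_per_m2, freq_hz):
            """Equivalent one-way timing error in nanoseconds."""
            return ionospheric_range_error(tec_el_per_m2, freq_hz) / C * 1e9

        # Assumed example: a mid-day content of 50 TEC units (5e17 el/m^2) at the
        # ATS-3 beacon frequency quoted in the abstract and, for comparison, at an
        # L-band navigation frequency.
        tec = 5e17
        for f_hz in (137.35e6, 1.6e9):
            print(f"{f_hz / 1e6:8.2f} MHz: "
                  f"{ionospheric_range_error(tec, f_hz):7.1f} m, "
                  f"{timing_error_ns(tec, f_hz):7.1f} ns")

    For the assumed 50 TECU, the uncorrected VHF error is on the order of a kilometre of range (a few microseconds of timing), which is why the day-to-day and small-scale TEC variability reported above matters for one-way timing.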

    Plasmaspheric effects on one way satellite timing signals

    The effects of ionospheric retardation on satellite-emitted timing signals were presented. The retardation at the navigation frequencies, which is proportional to the total ionospheric electron content (TEC), was determined by Faraday polarization measurements of VHF emissions from a geostationary satellite. The polarization data yielded TEC only up to approximately 1200 km, since the measurement technique is based on the Faraday effect, which is weighted by the terrestrial magnetic field.
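
    The roughly 1200 km ceiling follows from the form of the Faraday rotation integral, in which the electron density is weighted by the line-of-sight magnetic field. A commonly quoted quasi-longitudinal approximation (stated here for context, not taken from the paper) is

        \Omega \approx \frac{2.36 \times 10^{4}}{f^{2}} \int N_e(s)\, B(s)\cos\theta(s)\, \mathrm{d}s \qquad \text{(SI units, $\Omega$ in radians)}

    where N_e is the electron density, B cos(theta) the field component along the propagation path, and f the wave frequency. Because the geomagnetic field falls off roughly as the cube of geocentric distance, electrons above plasmaspheric heights contribute little to the rotation, so Faraday-derived TEC effectively excludes the plasmaspheric content that the title refers to.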

    The design and implementation of the Technical Facilities Controller (TFC) for the Goldstone deep space communications complex

    The Technical Facilities Controller is a microprocessor-based energy management system to be implemented in Deep Space Network facilities. The system is used in conjunction with facilities equipment at each of the complexes in the operation and maintenance of air-conditioning, power generation, power distribution, and other primary facilities equipment. Implementation of the Technical Facilities Controller has been completed at the Goldstone Deep Space Communications Complex, and the system is now operational. The installation completed at the Goldstone Complex is described and the utilization of the Technical Facilities Controller is evaluated. The findings will be used in the decision to implement a similar system at the overseas complexes at Canberra, Australia, and Madrid, Spain.

    Constraint Programming for Scheduling

    Our goal is to introduce the constraint programming (CP) approach within the context of scheduling. We start with an introduction to CP and its distinct technical vocabulary. We then present and illustrate a general algorithm for solving a CP problem with a simple scheduling example. Next, we review several published studies where CP has been used in scheduling problems, so as to provide a feel for its applicability. We discuss the advantages of CP in modeling and solving certain types of scheduling problems. We then illustrate the use of a commercial CP tool (OPL Studio) in modeling and designing a solution procedure for a classic scheduling problem. We conclude with our speculations about the future of scheduling research using this approach.
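
    As a concrete illustration of the general CP algorithm outlined above, the sketch below (an assumed toy example in Python, not the paper's OPL Studio model) encodes three tasks on a single machine with one precedence constraint as finite-domain start-time variables and solves the problem by consistency checking combined with depth-first backtracking.

        # Toy constraint-programming example: variables are task start times with
        # finite domains; constraints are a precedence relation and single-machine
        # no-overlap; search is depth-first backtracking with consistency checks.
        from itertools import product

        durations = {"A": 3, "B": 2, "C": 2}   # task -> processing time
        precedence = [("A", "C")]              # A must finish before C starts
        horizon = 8                            # latest allowed completion time

        # Domain of each start-time variable: 0 .. horizon - duration
        domains = {t: range(horizon - d + 1) for t, d in durations.items()}

        def consistent(assign):
            """Check precedence and no-overlap over the currently assigned tasks."""
            for a, b in precedence:
                if a in assign and b in assign and assign[a] + durations[a] > assign[b]:
                    return False
            for (t1, s1), (t2, s2) in product(assign.items(), repeat=2):
                if t1 < t2 and s1 < s2 + durations[t2] and s2 < s1 + durations[t1]:
                    return False               # the two intervals overlap
            return True

        def search(assign):
            """Assign one variable at a time, backtracking on inconsistency."""
            if len(assign) == len(durations):
                return dict(assign)
            task = next(t for t in durations if t not in assign)
            for start in domains[task]:
                assign[task] = start
                if consistent(assign):
                    solution = search(assign)
                    if solution:
                        return solution
                del assign[task]
            return None

        print(search({}))   # e.g. {'A': 0, 'B': 3, 'C': 5}

    Commercial tools such as OPL Studio replace the naive consistency check with dedicated propagation algorithms (for example, edge finding for no-overlap constraints), but the variables-domains-constraints-search structure is the same.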

    Positive reform of tuna farm diving in South Australia in response to government intervention

    © 2001 Occupational and Environmental Medicine. Objectives: Much of the tuna harvested in South Australia since 1990 has involved "farming" techniques requiring the use of divers. From 1993 to 1995, 17 divers from this industry were treated for decompression illness (DCI). In response, the State Government introduced corrective strategies. A decrease in the number of divers presenting for treatment was subsequently recorded. Consequently, the hypothesis was tested that the government intervention resulted in a decrease in the incidence of DCI in the industry and an improved clinical outcome of divers with DCI. Methods: The incidence of treated DCI in tuna farm divers was estimated from the number of divers with DCI treated and the number of dives undertaken, extrapolated from a survey of the industry in 1997-8. General health was measured in the tuna farm diving population by a valid and reliable self-assessment questionnaire. The outcome of the divers treated for DCI was analysed with a modified clinical severity scoring system. Results: The apparent incidence of treated DCI has decreased in tuna farm divers since the government intervention. The evidence supports a truly decreased incidence rather than underreporting. The general health of the tuna farm divers was skewed towards the asymptomatic end of the range, although health scores indicative of DCI were reported after 1.7% of the dives that did not result in recognised DCI. The clinical outcome of the divers treated since the intervention has improved, possibly because of earlier recognition of the disease and hence less time spent diving while having DCI. Conclusions: The government intervention in the tuna industry in South Australia has resulted in a reduced incidence of DCI in the industry.

    Turbulent and Transitional Modeling of Drag on Oceanographic Measurement Devices

    Computational fluid dynamic (CFD) techniques have been applied to the determination of drag on oceanographic devices known as expendable bathythermographs (XBTs). Such devices, which are used to monitor changes in ocean heat content, provide information that depends on their drag coefficient. Inaccuracies in drag calculations can affect the estimation of ocean heating associated with global warming. Traditionally, ocean-heating information was based on experimental correlations relating the depth of the device to its fall time; the time-depth relation is provided by a fall-rate equation (FRE). FRE depths are known to be reasonably accurate for ocean environments that match the experiments from which the correlations were developed. For other situations, use of the FRE may lead to depth errors that preclude the use of XBTs as accurate oceanographic devices. Here, a CFD approach has been taken that provides drag coefficients used to predict depths independently of an FRE.
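
    For context, the fall-rate equation (FRE) referred to above is a quadratic map from elapsed time to depth. The sketch below uses the widely quoted manufacturer coefficients for Sippican T-4/T-6/T-7 probes as an illustrative assumption (they are not taken from this paper) and shows how a small bias in the effective fall rate translates almost linearly into a depth error, which is the sensitivity the CFD-derived drag coefficients are meant to remove.

        # Illustrative only: the quadratic fall-rate equation z(t) = a*t - b*t**2.
        # Coefficients are the commonly quoted manufacturer values for Sippican
        # T-4/T-6/T-7 probes, used here as an assumption, not taken from the paper.
        A_FRE = 6.472      # m/s
        B_FRE = 2.16e-3    # m/s^2

        def fre_depth(t_s, a=A_FRE, b=B_FRE):
            """Depth in metres inferred from fall time t_s in seconds."""
            return a * t_s - b * t_s ** 2

        for t in (30.0, 60.0, 120.0):
            z = fre_depth(t)
            z_biased = fre_depth(t, a=1.01 * A_FRE)   # 1% fall-rate (drag) bias
            print(f"t = {t:5.1f} s   depth = {z:6.1f} m   1% bias -> {z_biased - z:+5.1f} m")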

    A computational method for determining XBT depths

    A new technique for determining the depth of expendable bathythermographs (XBTs) is developed. This new method uses a forward-stepping calculation that incorporates all of the forces on the XBT devices during their descent. Of particular note are the drag forces, which are calculated using a new drag coefficient expression. That expression, obtained entirely from computational fluid dynamic modeling, accounts for local variations in the ocean environment. Consequently, the method allows accurate determination of depths for any local temperature environment. The results, which are based entirely on numerical simulation, are compared with experiments on LM Sippican T-5 XBT probes. It is found that the calculated depths differ by less than 3% from depth estimates obtained with the standard fall-rate equation (FRE), and the differences decrease with depth. The computational model also allows an investigation of the fluid flow patterns along the outer surface of the probe as well as in the interior channel. The simulations account for complex flow phenomena such as laminar-turbulent transition and flow separation.
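
    A minimal sketch of the forward-stepping idea (an assumed toy model in Python, not the paper's implementation) integrates the force balance on the probe, weight minus buoyancy minus quadratic drag, step by step in time. In the paper the drag coefficient comes from CFD and varies with the local ocean environment, whereas every number below is a placeholder.

        # Forward-stepping depth calculation: integrate dv/dt and dz/dt explicitly.
        # All numerical values here are placeholder assumptions, not from the paper.
        G = 9.81            # gravitational acceleration, m/s^2
        RHO_W = 1025.0      # seawater density, kg/m^3 (in general depth-dependent)

        def probe_depth(mass, volume, area, cd, t_end, dt=0.01):
            """Euler integration of m*dv/dt = (m - rho*V)*g - 0.5*rho*Cd*A*v^2."""
            v, z = 0.0, 0.0
            for _ in range(int(t_end / dt)):
                drag = 0.5 * RHO_W * cd * area * v * v
                accel = ((mass - RHO_W * volume) * G - drag) / mass
                v += accel * dt
                z += v * dt
            return z

        # Placeholder probe parameters purely for illustration.
        print(f"depth after 60 s: {probe_depth(1.5, 3.0e-4, 2.0e-3, 0.45, 60.0):.1f} m")

    In the paper's method the drag coefficient (and the water density) would be updated along the descent from the CFD-derived expression and the local temperature profile, which is what removes the dependence on a fixed FRE.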

    IS Journal Quality Assessment Using the Author Affiliation Index

    Research productivity is one means by which academic units attain legitimacy within their institutional milieu and make their case for resources. Journal quality assessment is an important component of assessing faculty research productivity. We introduce the Author Affiliation Index (AAI), a simple method for assessing journal quality, to the IS domain. Essentially, the AAI of a journal is the percentage of academic authors publishing in that journal who are affiliated with a base set of high-quality academic institutions. Besides explaining the AAI, we demonstrate its use with a set of well-known IS journals, discuss its rankings vis-à-vis those resulting from other methods, and provide an example of how the basic AAI approach can be modified by changing the base school set used to define journal quality. The AAI has a number of advantages. First, it is a simple, low-cost, and transparent method for assessing any journal given a base school set. Second, it provides a consistent ranking of journals, particularly of those beyond the top consensus journals, where less consistency is achieved with other measures. Third, it enables new journals to be assessed rapidly against more established ones without the lags or costs of other measures. The AAI thus provides another indicator of journal quality, distinct from surveys and citation analyses.
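
    Because the AAI is defined as a simple percentage, it can be computed directly from author affiliation lists. The sketch below (hypothetical journal data and base-school set, used only to illustrate the definition) mirrors the description above: count the academic author appearances affiliated with the base set and divide by all academic author appearances.

        # Author Affiliation Index as defined above: the percentage of academic
        # authors publishing in a journal who are affiliated with a base set of
        # institutions. All names below are made-up placeholders.
        def author_affiliation_index(author_affiliations, base_schools):
            if not author_affiliations:
                return 0.0
            hits = sum(1 for school in author_affiliations if school in base_schools)
            return 100.0 * hits / len(author_affiliations)

        base = {"School A", "School B", "School C"}          # hypothetical base set
        journal_authors = ["School A", "School D", "School B",
                           "School A", "School E"]           # one entry per author appearance
        print(f"AAI = {author_affiliation_index(journal_authors, base):.1f}%")   # 60.0%

    Changing the base set, as the abstract suggests, only changes the base_schools argument in this sketch, which is what makes the measure cheap to recompute for different notions of "high-quality" institutions.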

    Combining checkpointing and data compression for large scale seismic inversion

    Seismic inversion and imaging are adjoint-based optimization problems that process up to terabytes of data, regularly exceeding the memory capacity of available computers. Data compression is an effective strategy to reduce this memory requirement by a certain factor, particularly if some loss in accuracy is acceptable. A popular alternative is checkpointing, where data is stored at selected points in time and values at other times are recomputed as needed from the last stored state. This allows arbitrarily large adjoint computations with limited memory, at the cost of additional recomputations. In this paper we combine compression and checkpointing for the first time to compute a realistic seismic inversion. The combination allows larger adjoint computations than using compression alone and significantly reduces the recomputation overhead compared with using checkpointing alone.
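
    To make the combined strategy concrete, the sketch below (an assumed toy adjoint loop, not the paper's wave-propagation solver) stores a losslessly compressed state every few forward time steps and, during the reverse sweep, decompresses the nearest checkpoint and recomputes only the intermediate states it needs; a lossy codec would slot into the same two helper functions.

        # Toy checkpointing-plus-compression adjoint loop; the forward and adjoint
        # "physics" are placeholders standing in for a seismic wave solver.
        import zlib
        import numpy as np

        def forward_step(u):
            return 0.5 * (np.roll(u, 1) + np.roll(u, -1))      # placeholder update

        def adjoint_step(lam, u):
            return 0.5 * (np.roll(lam, 1) + np.roll(lam, -1)) + 1e-3 * u

        def compress(u):
            return zlib.compress(u.tobytes())

        def decompress(buf, shape, dtype):
            return np.frombuffer(zlib.decompress(buf), dtype=dtype).reshape(shape)

        def adjoint_with_checkpoints(u0, nsteps, stride=10):
            shape, dtype = u0.shape, u0.dtype
            checkpoints = {}                       # step index -> compressed state
            u = u0.copy()
            for n in range(nsteps):                # forward sweep
                if n % stride == 0:
                    checkpoints[n] = compress(u)
                u = forward_step(u)
            lam = np.ones_like(u0)                 # placeholder adjoint source
            for n in reversed(range(nsteps)):      # reverse (adjoint) sweep
                base = (n // stride) * stride
                u = decompress(checkpoints[base], shape, dtype)
                for _ in range(n - base):          # recompute state at step n
                    u = forward_step(u)
                lam = adjoint_step(lam, u)
            return lam

        print(adjoint_with_checkpoints(np.linspace(0.0, 1.0, 64), nsteps=50).sum())

    The stride controls the memory/recompute trade-off: larger strides store fewer compressed states but recompute more forward steps per adjoint step, which is the balance the combined approach tunes.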