Relationships Between the Performance of Time/Frequency Standards and Navigation/Communication Systems
The relationship between system performance and clock or oscillator performance is discussed. Tradeoffs discussed include: short-term stability versus bandwidth requirements; frequency accuracy versus signal acquisition time; flicker of frequency and drift versus resynchronization time; frequency precision versus communications traffic volume; spectral purity versus bit error rate; and frequency standard stability versus frequency selection and adjustability. The benefits and tradeoffs of using precise frequency and time signals at various levels of precision and accuracy are emphasized.
An Algorithm for Distributing Coalitional Value Calculations among Cooperating Agents
The process of forming coalitions of software agents generally requires calculating a value for every possible coalition which indicates how beneficial that coalition would be if it were formed. Now, instead of having a single agent calculate all these values (as is typically the case), it is more efficient to distribute this calculation among the agents, thus using all the computational resources available to the system and avoiding the existence of a single point of failure. Given this, we present a novel algorithm for distributing this calculation among agents in cooperative environments. Specifically, by using our algorithm, each agent is assigned some part of the calculation such that the agents' shares are exhaustive and disjoint. Moreover, the algorithm is decentralized, requires no communication between the agents, has minimal memory requirements, and can reflect variations in the computational speeds of the agents. To evaluate the effectiveness of our algorithm, we compare it with the only other algorithm available in the literature for distributing the coalitional value calculations (due to Shehory and Kraus). This shows that for the case of 25 agents, the distribution process of our algorithm took less than 0.02% of the time, the values were calculated using 0.000006% of the memory, the calculation redundancy was reduced from 383,229,848 to 0, and the total number of bytes sent between the agents dropped from 1,146,989,648 to 0 (note that for larger numbers of agents, these improvements become exponentially better).
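The exhaustive/disjoint/no-communication property described in this abstract can be sketched with a simple deterministic split. Note this round-robin rule is only an illustration of the property, not the authors' actual algorithm (which, unlike this sketch, also weights shares by agent speed):

```python
from itertools import combinations

def coalition_share(agents, agent_id, num_agents):
    """Yield the coalitions whose values this agent should compute.

    Hypothetical round-robin rule: coalition k, in a fixed enumeration
    order, goes to agent (k mod num_agents). Because every agent derives
    the same enumeration locally, the shares are exhaustive and disjoint
    with no communication between agents.
    """
    k = 0
    for size in range(1, len(agents) + 1):
        for coalition in combinations(agents, size):
            if k % num_agents == agent_id:
                yield coalition
            k += 1

if __name__ == "__main__":
    agents = ["a1", "a2", "a3", "a4", "a5"]
    shares = [set(coalition_share(agents, i, len(agents)))
              for i in range(len(agents))]
    # 2^5 - 1 = 31 non-empty coalitions, split with no overlap
    print(sum(len(s) for s in shares))  # 31
```

Because the assignment is a pure function of the coalition's position in the enumeration, no agent needs to know what the others computed, which is what removes both the communication cost and the calculation redundancy.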
Hydroxyl radical reactivity at the air-ice interface
Hydroxyl radicals are important oxidants in the atmosphere and in natural waters. They are also expected to be important in snow and ice, but their reactivity has not been widely studied in frozen aqueous solution. We have developed a spectroscopic probe to monitor the formation and reactions of hydroxyl radicals in situ. Hydroxyl radicals are produced in aqueous solution via the photolysis of nitrite, nitrate, and hydrogen peroxide, and react rapidly with benzene to form phenol. Similar phenol formation rates were observed in aqueous solution and bulk ice. However, no reaction was observed at air-ice interfaces, or when bulk ice samples were crushed prior to photolysis to increase their surface area. We also monitored the heterogeneous reaction of benzene present at air-water and air-ice interfaces with gas-phase OH produced from HONO photolysis. Rapid phenol formation was observed on water surfaces, but no reaction was observed at the surface of ice. Under the same conditions, we observed rapid loss of the polycyclic aromatic hydrocarbon (PAH) anthracene at air-water interfaces, but no loss was observed at air-ice interfaces. Our results suggest that the reactivity of hydroxyl radicals toward aromatic organics is similar in bulk ice samples and in aqueous solution, but is significantly suppressed in the quasi-liquid layer (QLL) that exists at air-ice interfaces.
Pre-specification of statistical analysis approaches in published clinical trial protocols was inadequate
OBJECTIVES: Results from randomized trials can depend on the statistical analysis approach used. It is important to prespecify the analysis approach in the trial protocol to avoid selective reporting of analyses based on those which provide the most favourable results. We undertook a review of published trial protocols to assess how often the statistical analysis of the primary outcome was adequately prespecified. METHODS: We searched protocols of randomized trials indexed in PubMed in November 2016. We identified whether the following aspects of the statistical analysis approach for the primary outcome were adequately prespecified: (1) analysis population; (2) analysis model; (3) use of covariates; and (4) method of handling missing data. RESULTS: We identified 99 eligible protocols. Very few protocols adequately prespecified the analysis population (8/99, 8%), analysis model (27/99, 27%), covariates (40/99, 40%), or approach to handling missing data (10/99, 10%). Most protocols did not adequately predefine any of these four aspects of their statistical analysis approach (39%) or predefined only one aspect (36%). No protocols adequately predefined all four aspects of the analysis. CONCLUSION: The statistical analysis approach is rarely prespecified in published trial protocols. This may allow selective reporting of results based on different analyses.
Integrable and superintegrable systems associated with multi-sums of products
We construct and study certain Liouville integrable, superintegrable, and non-commutative integrable systems, which are associated with multi-sums of products. Comment: 26 pages, submitted to Proceedings of the Royal Society.
Adjusting for multiple prognostic factors in the analysis of randomised trials
Background: When multiple prognostic factors are adjusted for in the analysis of a randomised trial, it is unclear (1) whether it is necessary to account for each of the strata, formed by all combinations of the prognostic factors (stratified analysis), when randomisation has been balanced within each stratum (stratified randomisation), or whether adjusting for the main effects alone will suffice, and (2) the best method of adjustment in terms of type I error rate and power, irrespective of the randomisation method.
Methods: We used simulation to (1) determine whether a stratified analysis is necessary after stratified randomisation, and (2) compare different methods of adjustment in terms of power and type I error rate. We considered the following methods of analysis: adjusting for covariates in a regression model; adjusting for each stratum using either fixed or random effects; and Mantel-Haenszel or a stratified Cox model, depending on the outcome.
Results: Stratified analysis is required after stratified randomisation to maintain correct type I error rates when (a) there are strong interactions between prognostic factors, and (b) there are approximately equal numbers of patients in each stratum. However, simulations based on real trial data found that type I error rates were unaffected by the method of analysis (stratified vs unstratified), indicating these conditions were not met in real datasets. Comparison of different analysis methods found that with small sample sizes and a binary or time-to-event outcome, most analysis methods lead to either inflated type I error rates or a reduction in power; the lone exception was a stratified analysis using random effects for strata, which gave nominal type I error rates and adequate power.
Conclusions: It is unlikely that a stratified analysis is necessary after stratified randomisation except in extreme scenarios. Therefore, the method of analysis (accounting for the strata, or adjusting only for the covariates) will not generally need to depend on the method of randomisation used. Most methods of analysis work well with large sample sizes; however, treating strata as random effects should be the analysis method of choice with binary or time-to-event outcomes and a small sample size.
Accounting for centre-effects in multicentre trials with a binary outcome - when, why, and how?
BACKGROUND: It is often desirable to account for centre-effects in the analysis of multicentre randomised trials; however, it is unclear which analysis methods are best in trials with a binary outcome. METHODS: We compared the performance of four methods of analysis (fixed-effects models, random-effects models, generalised estimating equations (GEE), and Mantel-Haenszel) using a re-analysis of a previously reported randomised trial (MIST2) and a large simulation study. RESULTS: The re-analysis of MIST2 found that fixed-effects and Mantel-Haenszel led to many patients being dropped from the analysis due to over-stratification (up to 69% dropped for Mantel-Haenszel, and up to 33% dropped for fixed-effects). Conversely, random-effects and GEE included all patients in the analysis; however, GEE did not reach convergence. Estimated treatment effects and p-values were highly variable across different analysis methods. The simulation study found that most methods of analysis performed well with a small number of centres. With a large number of centres, fixed-effects led to biased estimates and inflated type I error rates in many situations, and Mantel-Haenszel lost power compared to other analysis methods in some situations. Conversely, both random-effects and GEE gave nominal type I error rates and good power across all scenarios, and were usually as good as or better than either fixed-effects or Mantel-Haenszel. However, this was only true for GEEs with non-robust standard errors (SEs); using a robust 'sandwich' estimator led to inflated type I error rates across most scenarios. CONCLUSIONS: With a small number of centres, we recommend the use of fixed-effects, random-effects, or GEE with non-robust SEs. Random-effects and GEE with non-robust SEs should be used with a moderate or large number of centres.
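For reference, the Mantel-Haenszel method compared in this abstract pools a common odds ratio across centre-specific 2x2 tables. A minimal sketch (function name and example counts are invented for illustration):

```python
def mantel_haenszel_or(tables):
    """Pooled odds ratio across centre-specific 2x2 tables.

    Each table is (a, b, c, d): a/b = events/non-events on treatment,
    c/d = events/non-events on control, within one centre.
    OR_MH = sum_k(a_k * d_k / n_k) / sum_k(b_k * c_k / n_k).
    Note that centres where both a*d = 0 and b*c = 0 contribute
    nothing to either sum, which is how heavy stratification can
    effectively drop patients from the analysis.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Two hypothetical centres with identical 2x2 tables
tables = [(2, 1, 1, 2), (2, 1, 1, 2)]
print(mantel_haenszel_or(tables))  # 4.0
```

With identical tables the pooled estimate equals each centre's own odds ratio (2*2)/(1*1) = 4, as expected for a common-odds-ratio estimator.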
Variation of discrete spectra for non-selfadjoint perturbations of selfadjoint operators
Let B = A + K, where A is a bounded selfadjoint operator and K is an element of the von Neumann-Schatten ideal S_p with p > 1. Let {\lambda_n} denote an enumeration of the discrete spectrum of B. We show that \sum_n \dist(\lambda_n, \sigma(A))^p is bounded from above by a constant multiple of \|K\|_p^p. We also derive a unitary analog of this estimate and apply it to obtain new estimates on zero-sets of Cauchy transforms. Comment: Differences to previous version: extended Introduction, new Section 5, additional references. To appear in Int. Eq. Op. Theor.
Risk of selection bias in randomised trials
Background: Selection bias occurs when recruiters selectively enrol patients into the trial based on what the next treatment allocation is likely to be. This can occur even if appropriate allocation concealment is used if recruiters can guess the next treatment assignment with some degree of accuracy. This typically occurs in unblinded trials when restricted randomisation is implemented to force the number of patients in each arm or within each centre to be the same. Several methods to reduce the risk of selection bias have been suggested; however, it is unclear how often these techniques are used in practice. Methods: We performed a review of published trials which were not blinded to assess whether they utilised methods for reducing the risk of selection bias. We assessed the following techniques: (a) blinding of recruiters; (b) use of simple randomisation; (c) avoidance of stratification by site when restricted randomisation is used; (d) avoidance of permuted blocks if stratification by site is used; and (e) incorporation of prognostic covariates into the randomisation procedure when restricted randomisation is used. We included parallel group, individually randomised phase III trials published in four general medical journals (BMJ, Journal of the American Medical Association, The Lancet, and New England Journal of Medicine) in 2010. Results: We identified 152 eligible trials. Most trials (98%) provided no information on whether recruiters were blind to previous treatment allocations. Only 3% of trials used simple randomisation; 63% used some form of restricted randomisation, and 35% did not state the method of randomisation. Overall, 44% of trials were stratified by site of recruitment; 27% were not, and 29% did not report this information. Most trials that did stratify by site of recruitment used permuted blocks (58%), and only 15% reported using random block sizes. 
Many trials that used restricted randomisation also included prognostic covariates in the randomisation procedure (56%). Conclusions: The risk of selection bias could not be ascertained for most trials due to poor reporting. Many trials which did provide details on the randomisation procedure were at risk of selection bias due to a poorly chosen randomisation method. Techniques to reduce the risk of selection bias should be more widely implemented.
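The selection-bias mechanism examined in this review is easy to see in code: within a fixed-size permuted block, the trailing allocations become predictable once the earlier ones are known. A minimal sketch (block size and seed are arbitrary choices for illustration):

```python
import random

def permuted_blocks(n_patients, block_size=4, seed=0):
    """Allocate patients to arms 'A'/'B' in permuted blocks.

    Each block contains exactly block_size/2 of each arm in random
    order. This balances arm sizes, but an unblinded recruiter who
    tracks earlier allocations can guess the later slots in a block:
    after three allocations in a block of four, the fourth is forced.
    Random block sizes (reported by only 15% of the trials reviewed)
    make such guessing harder.
    """
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_patients:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)
        allocations.extend(block)
    return allocations[:n_patients]

alloc = permuted_blocks(8)
# Every full block of 4 contains exactly two of each arm
print([alloc[i:i + 4].count("A") for i in (0, 4)])  # [2, 2]
```

Simple (unrestricted) randomisation has no such structure to exploit, which is why the review treats it as one of the techniques that reduces selection-bias risk.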