
    Analysis of binary spatial data by quasi-likelihood estimating equations

    The goal of this paper is to describe the application of quasi-likelihood estimating equations for spatially correlated binary data. A logistic function is used to model the marginal probability of binary responses in terms of parameters of interest. With mild assumptions on the correlations, the Leonov-Shiryaev formula combined with a comparison of characteristic functions can be used to establish asymptotic normality for linear combinations of the binary responses. The consistency and asymptotic normality of quasi-likelihood estimates can then be derived. By modeling spatial correlation with a variogram, we apply these asymptotic results to test independence of two spatially correlated binary outcomes and illustrate the concepts with a well-known example based on data from Lansing Woods. A comparison of generalized estimating equations and the proposed approach is also discussed.

    Comment: Published at http://dx.doi.org/10.1214/009053605000000057 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
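The quasi-score iteration behind such estimates can be sketched in a few lines. The sketch below is a hypothetical minimal version that solves the quasi-likelihood estimating equation D'V^{-1}(y - mu) = 0 for the marginal logistic model by Fisher scoring, using an independence working covariance in place of the paper's variogram-based spatial correlation; all data and variable names are illustrative.

```python
import numpy as np

# Toy data: marginal logistic model P(y_i = 1) = expit(x_i' beta).
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

# Solve the quasi-score equation U(beta) = D' V^{-1} (y - mu) = 0 by
# Fisher scoring, with an independence working covariance V = diag(mu(1-mu)).
beta = np.zeros(2)
for _ in range(50):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    v = mu * (1.0 - mu)              # working variances
    D = X * v[:, None]               # d mu / d beta
    U = D.T @ ((y - mu) / v)         # quasi-score D' V^{-1} (y - mu)
    H = D.T @ (D / v[:, None])       # sensitivity matrix D' V^{-1} D
    step = np.linalg.solve(H, U)
    beta = beta + step
    if np.linalg.norm(step) < 1e-10:
        break
```

With the independence working covariance, the quasi-score reduces to the ordinary logistic score X'(y - mu), so the loop is standard Fisher scoring; the spatial structure of the paper would enter by replacing V with a variogram-based covariance.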

    Variations in the Upper Paleolithic Adaptations of North China: A Review of the Evidence and Implications for the Onset of Food Production

    The Upper Paleolithic (UP) of North China has the richest archaeological record and the longest history of research in the Paleolithic archaeology of China, but there is a relative lack of systematic studies addressing human adaptations. This paper explores the spatial and temporal variability of human adaptations in terms of mobility, the key variable in the adaptive systems of hunter-gatherers. We find that before the UP, little adaptive differentiation is visible in the archaeological record of North China. The early Upper Paleolithic (EUP) is distinguished by four distinctive modes of mobility and subsistence, organized roughly along lines of habitat variation. These modes persisted into the Late Upper Paleolithic (LUP), underlying the widespread prevalence of microblade technology throughout North China. This pattern significantly influenced adaptive changes during the transition from the terminal Pleistocene to the early Holocene. The earliest food production emerged in hilly-flank habitats where EUP mobility decreased quickly and social organization was more complex. This retrospective view of UP adaptations highlights the important role that prior conditions played at the evolutionary crossroads of prehistoric North China.

    Device-independent point estimation from finite data and its application to device-independent property estimation

    The device-independent approach to physics is one where conclusions are drawn directly from the observed correlations between measurement outcomes. In quantum information, this approach allows one to make strong statements about the properties of the underlying systems or devices solely via the observation of Bell-inequality-violating correlations. However, since one can only perform a finite number of experimental trials, statistical fluctuations necessarily accompany any estimation of these correlations. Consequently, an important gap remains between the many theoretical tools developed for the asymptotic scenario and the experimentally obtained raw data. In particular, a physical and concurrently practical way to estimate the underlying quantum distribution has so far remained elusive. Here, we show that the natural analogs of the maximum-likelihood estimation technique and the least-square-error estimation technique in the device-independent context result in point estimates of the true distribution that are physical, unique, computationally tractable and consistent. They thus serve as sound algorithmic tools allowing one to bridge the aforementioned gap. As an application, we demonstrate how such estimates of the underlying quantum distribution can be used to provide, in certain cases, trustworthy estimates of the amount of entanglement present in the measured system. In stark contrast to existing approaches to device-independent parameter estimation, our estimation does not require prior knowledge of any Bell inequality tailored to the specific property and the specific distribution of interest.

    Comment: Essentially the published version, but with the typo in Eq. (E5) corrected
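As a loose illustration of least-square-error point estimation from finite data (not the paper's method, which projects onto the quantum set), the hypothetical sketch below regularizes raw CHSH frequencies by a Euclidean projection onto the affine normalization and no-signaling constraints only; the underlying distribution, sample size, and helper names are assumptions.

```python
import numpy as np

# Index p[a, b, x, y] = p(ab|xy) in the 2-input/2-output (CHSH) scenario.
def idx(a, b, x, y):
    return ((a * 2 + b) * 2 + x) * 2 + y

A_rows, b_vals = [], []
# Normalization: sum_ab p(ab|xy) = 1 for each setting pair (x, y).
for x in range(2):
    for y in range(2):
        row = np.zeros(16)
        for a in range(2):
            for b in range(2):
                row[idx(a, b, x, y)] = 1.0
        A_rows.append(row); b_vals.append(1.0)
# No-signaling: Alice's marginal sum_b p(ab|xy) must not depend on y,
# and symmetrically for Bob.
for a in range(2):
    for x in range(2):
        row = np.zeros(16)
        for b in range(2):
            row[idx(a, b, x, 0)] += 1.0
            row[idx(a, b, x, 1)] -= 1.0
        A_rows.append(row); b_vals.append(0.0)
for b in range(2):
    for y in range(2):
        row = np.zeros(16)
        for a in range(2):
            row[idx(a, b, 0, y)] += 1.0
            row[idx(a, b, 1, y)] -= 1.0
        A_rows.append(row); b_vals.append(0.0)
A, bvec = np.array(A_rows), np.array(b_vals)

# "Raw data": finite-sample frequencies around the Tsirelson-point
# distribution p(ab|xy) = (1 + (-1)^(a+b+xy)/sqrt(2)) / 4.
rng = np.random.default_rng(7)
f = np.zeros(16)
for x in range(2):
    for y in range(2):
        probs = [(1 + (-1) ** (a + b + x * y) / np.sqrt(2)) / 4
                 for a in range(2) for b in range(2)]
        counts = rng.multinomial(1000, probs)
        k = 0
        for a in range(2):
            for b in range(2):
                f[idx(a, b, x, y)] = counts[k] / 1000.0
                k += 1

# Least-squares projection onto the affine constraint set {p : A p = b}.
p_hat = f - np.linalg.pinv(A) @ (A @ f - bvec)
```

The projection lands exactly on the no-signaling affine subspace; the paper's point is that projecting instead onto (an approximation of) the quantum set yields estimates that are additionally physical and consistent.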

    Naturally restricted subsets of nonsignaling correlations: typicality and convergence

    It is well known that in a Bell experiment, the observed correlation between measurement outcomes -- as predicted by quantum theory -- can be stronger than that allowed by local causality, yet not fully constrained by the principle of relativistic causality. In practice, the characterization of the set Q of quantum correlations is often carried out through a converging hierarchy of outer approximations. On the other hand, some subsets of Q arising from additional constraints [e.g., originating from quantum states having positive partial transposition (PPT) or being finite-dimensional maximally entangled (MES)] turn out to be also amenable to similar numerical characterizations. How, then, at a quantitative level, are all these naturally restricted subsets of nonsignaling correlations different? Here, we consider several bipartite Bell scenarios and numerically estimate their volume relative to that of the set of nonsignaling correlations. Within the number of cases investigated, we have observed that (1) for a given number of inputs n_s (outputs n_o), the relative volume of both the Bell-local set and the quantum set increases (decreases) rapidly with increasing n_o (n_s); (2) although the so-called macroscopically local set Q_1 may approximate Q well in the two-input scenarios, it can be a very poor approximation of the quantum set when n_s > n_o; (3) the almost-quantum set Q̃_1 is an exceptionally good approximation to the quantum set; (4) the difference between Q and the set of correlations originating from MES is most significant when n_o = 2; whereas (5) the difference between the Bell-local set and the PPT set generally becomes more significant with increasing n_o.
    This last comparison, in particular, allows us to identify Bell scenarios where there is little hope of realizing a Bell violation by PPT states, and those that deserve further exploration.

    Comment: v4: published version (in Quantum); v3: substantially rewritten, main results summarized in 10 observations, 8 figures, and 7 tables; v2: results updated; v1: 13 + 4 pages, 10 tables, 5 figures; this is [66] of arXiv:1810.00443; comments are welcome
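A minimal sketch of the kind of Monte Carlo relative-volume estimate described above, restricted for illustration to the full-correlator slice of the two-input/two-output scenario: there, nonsignaling behaviors with unbiased marginals form the cube [-1, 1]^4, the Bell-local polytope is carved out by the eight CHSH facets, and Tsirelson's bound 2√2 gives an outer approximation to the quantum set (the slice, constants, and sample size are assumptions for the sketch).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
# Uniform samples over the nonsignaling correlator cube [-1, 1]^4
# (full correlators E_xy, unbiased marginals).
E = rng.uniform(-1.0, 1.0, size=(N, 4))

# The eight CHSH facets: flip the sign of one correlator, overall sign +/-.
facets = []
for k in range(4):
    s = np.ones(4)
    s[k] = -1.0
    facets.append(s)
    facets.append(-s)
S = np.array(facets)

chsh = E @ S.T                                   # all 8 CHSH values per sample
frac_local = np.all(chsh <= 2.0, axis=1).mean()  # inside the Bell-local polytope
frac_tsirelson = np.all(chsh <= 2.0 * np.sqrt(2.0), axis=1).mean()  # outer bound on Q
```

The two fractions estimate the relative volumes of the local set and of a Tsirelson-type outer relaxation of the quantum set within this nonsignaling slice; the paper's higher-input/output scenarios require the full probability-space parametrization and SDP hierarchies instead of a closed-form facet list.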

    Towards Zero Memory Footprint Spiking Neural Network Training

    Biologically inspired Spiking Neural Networks (SNNs), which process information using discrete-time events known as spikes rather than continuous values, have garnered significant attention due to their hardware-friendly and energy-efficient characteristics. However, training SNNs requires a considerably large memory footprint, given the additional storage needed for spikes or events, leading to a complex structure and dynamic setup. In this paper, to address the memory constraint in SNN training, we introduce an innovative framework characterized by a remarkably low memory footprint. We (i) design a reversible SNN node that retains a high level of accuracy. Our design achieves a 58.65× reduction in memory usage compared to the current SNN node. We (ii) propose a unique algorithm to streamline the backpropagation process of our reversible SNN node. This significantly trims the backward floating-point operations (FLOPs), thereby accelerating the training process in comparison to current reversible layer backpropagation methods. Using our algorithm, training time can be curtailed by 23.8% relative to existing reversible layer architectures.
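The reversible-node idea can be illustrated, without reproducing the paper's actual design, by a generic additive coupling layer: because the inputs can be recomputed exactly from the outputs, intermediate activations need not be stored for backpropagation. The functions F and G below are placeholder sub-networks (tanh stands in for a surrogate spike nonlinearity); all shapes and weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W_f = rng.normal(size=(8, 8)) * 0.1
W_g = rng.normal(size=(8, 8)) * 0.1

def F(x):                         # placeholder sub-network
    return np.tanh(x @ W_f)

def G(x):                         # placeholder sub-network
    return np.tanh(x @ W_g)

def forward(x1, x2):              # additive coupling: invertible by construction
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def inverse(y1, y2):              # recompute inputs from outputs, so no
    x2 = y2 - G(y1)               # activations need to be stored during
    x1 = y1 - F(x2)               # the forward pass
    return x1, x2

x1, x2 = rng.normal(size=(2, 4, 8))
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)          # bit-exact reconstruction of (x1, x2)
```

During the backward pass, a reversible layer reruns `inverse` to regenerate its inputs on the fly, which is the source of the memory savings the abstract quantifies; the paper's contribution is a node and backpropagation schedule specialized to spiking dynamics.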

    General approach of causal mediation analysis with causally ordered multiple mediators and survival outcome

    Causal mediation analysis with multiple mediators (causal multi-mediation analysis) is critical for understanding why an intervention works, especially in medical research. Deriving the path-specific effects (PSEs) of exposure on the outcome through a certain set of mediators can detail the causal mechanism of interest. However, existing models of causal multi-mediation analysis are usually restricted to partial decomposition, which can only evaluate the cumulative effect of several paths. Moreover, a general form of PSEs for an arbitrary number of mediators has not been proposed. In this study, we provide a generalized definition of PSEs for partial decomposition (partPSEs) and for complete decomposition, which we extend to the survival outcome. We apply the interventional analogues of PSEs (iPSEs) for complete decomposition to address the difficulty of non-identifiability. Based on Aalen's additive hazards model and Cox's proportional hazards model, we derive generalized analytic forms and establish asymptotic properties for both iPSEs and partPSEs for survival outcomes. Simulations are conducted to evaluate the performance of estimation in several scenarios. We apply the new methodology to investigate the mechanism by which methylation signals affect mortality, mediated through the expression of three nested genes, among lung cancer patients.
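To make the complete decomposition concrete, here is a hypothetical Monte Carlo sketch with exposure A and two causally ordered mediators M1 -> M2, using a toy linear Gaussian model and a continuous outcome instead of the paper's survival setting. It mimics the interventional-analogue (random-draw) construction and shows that the three path-specific effects telescope exactly to the total effect; all coefficients and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy structural model (not the paper's survival models):
#   M1 = 0.5*A + e1,  M2 = 0.3*A + 0.4*M1 + e2,  Y = A + 0.6*M1 + 0.8*M2 + e3
def g(a, a1, a2, n=200_000):
    """Interventional analogue of E[Y(a, M1(a1), M2(a2))]: the M1 fed to Y
    is drawn under exposure a1, while M2 is generated under exposure a2 from
    an independent M1 draw, mimicking the random-draw (iPSE) construction."""
    m1_y  = rng.normal(0.5 * a1, 1.0, n)               # M1 draw fed to Y
    m1_in = rng.normal(0.5 * a2, 1.0, n)               # independent M1 feeding M2
    m2    = rng.normal(0.3 * a2 + 0.4 * m1_in, 1.0, n)
    y     = a + 0.6 * m1_y + 0.8 * m2 + rng.normal(0.0, 1.0, n)
    return y.mean()

g111, g011, g001, g000 = g(1, 1, 1), g(0, 1, 1), g(0, 0, 1), g(0, 0, 0)
pse_direct = g111 - g011   # A -> Y, not through M1 or M2
pse_m1     = g011 - g001   # A -> M1 -> Y
pse_m2     = g001 - g000   # A -> (M1 ->) M2 -> Y
total      = g111 - g000   # the three PSEs sum to this by telescoping
```

Changing one exposure index at a time from (1,1,1) down to (0,0,0) yields a complete decomposition: the intermediate terms cancel, so the PSEs add up to the total effect by construction, which is the property the generalized definition preserves for arbitrarily many mediators.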

    How E-Servqual Affects Customer's Online Purchase Intention?

    With the boom of the Internet, it has become one of consumers' main shopping channels. However, online shopping behavior differs across cultures and countries, and these differences are worth examining. This study explores online shopping by Internet users in a developing country, Malaysia; 118 questionnaire responses were collected. The statistical packages SPSS 17.0 and AMOS 6.0 were used to analyze the effects of e-service quality on satisfaction, trust, and purchase intention. The model fit was at an acceptable level, indicating that the theoretical model supports the description of e-service quality for e-retailers: online purchase intention is affected by trust and satisfaction. The results may serve as a reference for those interested in developing a transnational e-retail business, as well as for academic research on cross-cultural comparative analysis.

    Delivery Route Management based on Dijkstra Algorithm

    For businesses that provide delivery services, the efficiency of the delivery process in terms of punctuality is very important. In addition to increasing customer trust, efficient route management and selection are required to reduce vehicle fuel costs and expedite delivery. Some small and medium businesses still use conventional methods to manage delivery routes: decisions about delivery schedules and routes are made without any specific method to expedite the delivery settlement process. This process is inefficient, takes a long time, increases costs, and is prone to errors. Therefore, the Dijkstra algorithm has been used to improve the delivery management process. A delivery management system was developed to help managers and drivers schedule efficient routes for delivering product orders to recipients. Based on testing, the Dijkstra algorithm embedded in the nearest-route search function for the delivery process worked well. This system is expected to improve the efficient management and delivery of orders.
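A minimal sketch of the shortest-route computation such a system could use: textbook Dijkstra with a binary heap over a hypothetical road network (the node names and distances are illustrative, not from the paper).

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on a weighted graph given as
    {node: [(neighbor, weight), ...]} with nonnegative weights."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return dist, prev

def route(prev, target):
    """Reconstruct the delivery route by walking predecessors backwards."""
    path = [target]
    while path[-1] in prev:
        path.append(prev[path[-1]])
    return path[::-1]

# Hypothetical road network: distances (km) from the depot to drop-off points.
roads = {
    "depot": [("A", 4), ("B", 2)],
    "B": [("A", 1), ("C", 7)],
    "A": [("C", 3)],
    "C": [],
}
dist, prev = dijkstra(roads, "depot")
# route(prev, "C") -> ["depot", "B", "A", "C"], with dist["C"] == 6
```

In a delivery-management setting, the graph would be built from road-segment distances or travel times, and the reconstructed path would be handed to the driver as the scheduled route.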