3,182 research outputs found

    Tele-autonomous systems: New methods for projecting and coordinating intelligent action at a distance

    There is a growing need for humans to perform complex remote operations and to extend the intelligence and experience of experts to distant applications. It is asserted that a blending of human intelligence, modern information technology, remote control, and intelligent autonomous systems is required; the term tele-autonomous technology, or tele-automation, has been coined for methods producing intelligent action at a distance. Tele-automation goes beyond autonomous control by blending in human intelligence. It goes beyond tele-operation by incorporating as much autonomy as possible and/or reasonable. A new approach is discussed for solving one of the fundamental problems facing tele-autonomous systems: the need to overcome time delays due to telemetry and signal propagation. New concepts, called time and position clutches, are introduced that allow the time and position frames between the local user control and the remote device being controlled to be desynchronized, respectively. The design and implementation of these mechanisms are described in detail. It is demonstrated that these mechanisms lead to substantial telemanipulation performance improvements, including improvements even in the absence of time delays. The new controls also yield a simple protocol for control handoffs of manipulation tasks between local operators and remote systems.
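
The abstract gives no implementation detail for the clutches; purely as an illustration of the buffering idea behind a time clutch (class and method names here are hypothetical, not from the paper):

```python
from collections import deque

class TimeClutch:
    """Illustrative buffer that desynchronizes operator time from remote time.

    While the clutch is disengaged, operator commands accumulate locally
    (the operator works ahead of the delayed remote device); re-engaging
    drains the buffer to the remote side in order.
    """

    def __init__(self):
        self.engaged = True
        self.buffer = deque()
        self.sent = []   # stands in for the remote transmission channel

    def command(self, cmd):
        if self.engaged:
            self.sent.append(cmd)     # pass through in real time
        else:
            self.buffer.append(cmd)   # operator runs ahead of the remote

    def disengage(self):
        self.engaged = False

    def engage(self):
        # Replay buffered commands to the remote device, preserving order.
        while self.buffer:
            self.sent.append(self.buffer.popleft())
        self.engaged = True
```

For example, disengaging, issuing two commands, and re-engaging delivers all commands to the remote side in the order issued, which is the property the handoff protocol relies on.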

    Tele-Autonomous control involving contact

    Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented together with methods of extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching. Line range sensors are used to extract line-segment features from an object. The extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited to line features. Feature points (point-to-point matching) and unit direction vectors (vector-to-vector matching) can also be used as inputs, and there is no upper limit on the number of features input. The algorithm thus allows the use of redundant features to find a better solution. It uses dual number quaternions to represent the position and orientation of an object and uses the least-squares optimization method to find an optimal solution for the object's location. The advantage of this representation is that the method solves for the location estimate by minimizing a single cost function associated with the sum of the orientation and position errors, and thus outperforms other similar algorithms in both accuracy and speed. The difficulties that arise when the operator is controlling a remote robot to perform manipulation tasks are also discussed. The main problems facing the operator are time delays in signal transmission and the uncertainties of the remote environment. How object localization techniques can be used together with other techniques, such as predictor displays and time desynchronization, to help overcome these difficulties is then discussed.
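
The dual-quaternion solver itself is not reproduced here; for the point-to-point matching case, the same least-squares pose problem is commonly solved with the SVD-based Kabsch algorithm, sketched below (function and variable names are our own, not the paper's):

```python
import numpy as np

def locate_object(model_pts, sensed_pts):
    """Least-squares rigid pose (R, t) aligning model points to sensed points.

    Solves min sum_i || R @ m_i + t - s_i ||^2 via the SVD-based Kabsch
    algorithm. Inputs are (N, 3) arrays of corresponding points; the paper's
    dual-quaternion formulation additionally handles line and direction
    features in the same cost function.
    """
    m_c = model_pts.mean(axis=0)
    s_c = sensed_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (model_pts - m_c).T @ (sensed_pts - s_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the least-squares solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = s_c - R @ m_c
    return R, t
```

With exact correspondences the recovered (R, t) reproduces the true rigid motion; with noisy or redundant correspondences it returns the least-squares optimum, which is the role the redundant features play in the second algorithm.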

    New concepts in tele-autonomous systems

    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/76226/1/AIAA-1987-1686-200.pd

    Relaxed Bell inequalities and Kochen-Specker theorems

    The combination of various physically plausible properties, such as no signaling, determinism, and experimental free will, is known to be incompatible with quantum correlations. Hence, these properties must be individually or jointly relaxed in any model of such correlations. The necessary degrees of relaxation are quantified here, via natural distance and information-theoretic measures. This allows quantitative comparisons between different models in terms of the resources, such as the number of bits of randomness, communication, and/or correlation, that they require. For example, measurement dependence is a relatively strong resource for modeling singlet state correlations, with only 1/15 of one bit of correlation required between measurement settings and the underlying variable. It is shown how various 'relaxed' Bell inequalities may be obtained, which precisely specify the complementary degrees of relaxation required to model any given violation of a standard Bell inequality. The robustness of a class of Kochen-Specker theorems, to relaxation of measurement independence, is also investigated. It is shown that a theorem of Mermin remains valid unless measurement independence is relaxed by 1/3. The Conway-Kochen 'free will' theorem and a result of Hardy are less robust, failing if measurement independence is relaxed by only 6.5% and 4.5%, respectively. An appendix shows that the existence of an outcome-independent model is equivalent to the existence of a deterministic model.
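
For orientation, the standard (unrelaxed) Bell-CHSH inequality whose relaxed variants the paper derives can be written as:

```latex
S \;=\; \langle A_1 B_1 \rangle + \langle A_1 B_2 \rangle
      + \langle A_2 B_1 \rangle - \langle A_2 B_2 \rangle \;\le\; 2 ,
```

where each \(\langle A_a B_b \rangle\) is the correlation of \(\pm 1\)-valued outcomes for setting pair \((a, b)\). Quantum singlet correlations can reach \(S = 2\sqrt{2}\) (the Tsirelson bound); the relaxed inequalities replace the bound 2 by a function of the degrees of relaxation of no-signaling, determinism, and measurement independence.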

    Paleoseismicity derived from piston coring methods, Explorer and Juan de Fuca Plate systems, British Columbia

    Coring of marine sediments has revealed deposits related to slope instability induced by seismicity on the western margin of Canada. Debris flows and turbidite sequences related to megathrust earthquakes have been recovered in six piston cores on the Juan de Fuca and Explorer tectonic plates, allowing comparison of the response of each plate to shaking during great earthquakes. Analyses of the recovered cores show that turbidite sequences associated with a megathrust earthquake occur on the Juan de Fuca Plate but do not occur in cores collected 90 km away at a similar site on the Explorer Plate. The record of subduction-related earthquake turbidite sequences is not complete at the Juan de Fuca study area, so no reconstruction of megathrust earthquake periodicity is possible using this site alone. These results indicate that strong ground shaking is probably not experienced during large subduction earthquakes on the Explorer Plate.

    Supersymmetric quantum cosmological billiards

    D=11 Supergravity near a space-like singularity admits a cosmological billiard description based on the hyperbolic Kac-Moody group E10. The quantization of this system via the supersymmetry constraint is shown to lead to wavefunctions involving automorphic (Maass wave) forms under the modular group W^+(E10)=PSL(2,O) with Dirichlet boundary conditions on the billiard domain. A general inequality for the Laplace eigenvalues of these automorphic forms implies that the wave function of the universe is generically complex and always tends to zero when approaching the initial singularity. We discuss possible implications of this result for the question of singularity resolution in quantum cosmology and comment on the differences with other approaches.

    Local deterministic model of singlet state correlations based on relaxing measurement independence

    The derivation of Bell inequalities requires an assumption of measurement independence, related to the amount of free will experimenters have in choosing measurement settings. Violation of these inequalities by singlet state correlations, as has been experimentally observed, brings this assumption into question. A simple measure of the degree of measurement independence is defined for correlation models, and it is shown that all spin correlations of a singlet state can be modeled by giving up a fraction of just 14% of measurement independence. The underlying model is deterministic and no-signalling. It may thus be favourably compared with other underlying models of the singlet state, which require maximum indeterminism or maximum signalling. A local deterministic model is also given that achieves the maximum possible violation of the well known Bell-CHSH inequality, at a cost of only 1/3 of measurement independence.
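
The maximal Bell-CHSH violation referred to here can be checked numerically from the singlet correlation function E(a, b) = -cos(a - b). A minimal sketch (the measurement angles below are the standard optimal settings, not values taken from the paper):

```python
import numpy as np

def chsh(a1, a2, b1, b2):
    """CHSH combination S = E(a1,b1) + E(a1,b2) + E(a2,b1) - E(a2,b2)
    for singlet-state correlations E(a, b) = -cos(a - b).

    Local deterministic models with full measurement independence
    satisfy |S| <= 2; quantum mechanics reaches 2*sqrt(2).
    """
    E = lambda a, b: -np.cos(a - b)
    return E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)

# One standard choice of settings (radians) achieving the maximal
# quantum violation |S| = 2*sqrt(2).
S = chsh(0.0, np.pi / 2, np.pi / 4, -np.pi / 4)
```

The gap between |S| = 2√2 and the local bound 2 is the violation that the paper's model reproduces at the stated cost in measurement independence.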

    A real-world exploration into clinical outcomes of direct oral anticoagulant therapy in people with chronic kidney disease: a large hospital-based study

    Background There is limited evidence to support definite clinical outcomes of direct oral anticoagulant (DOAC) therapy in chronic kidney disease (CKD). By identifying the important variables associated with clinical outcomes following DOAC administration in patients in different stages of CKD, this study aims to address this evidence gap. Methods An anonymised dataset comprising 97,413 patients receiving DOAC therapy in a tertiary health setting was systematically extracted from multidimensional electronic health records and prepared for analysis. Machine learning classifiers were applied to the prepared dataset to select the important features, which informed covariate selection in multivariate logistic regression analysis. Results For both CKD and non-CKD DOAC users, features such as length of stay, treatment days, and age were ranked highest for relevance to adverse outcomes like death and stroke. Patients with Stage 3a CKD had significantly higher odds of ischaemic stroke (OR 2.45, 95% CI: 2.10–2.86; p = 0.001) and lower odds of all-cause mortality (OR 0.87, 95% CI: 0.79–0.95; p = 0.001) on apixaban therapy. In patients with CKD (Stage 5) receiving apixaban, the odds of death were significantly lowered (OR 0.28, 95% CI: 0.14–0.58; p = 0.001), while the effect on ischaemic stroke was not significant. Conclusions A positive effect of DOAC therapy was observed in advanced CKD. Key factors influencing clinical outcomes following DOAC administration in patients in different stages of CKD were identified. These are crucial for designing more advanced studies to explore safer and more effective DOAC therapy for this population.
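
The odds ratios reported above can be read against the standard 2x2-table calculation. A minimal sketch with hypothetical counts (not the study's data), using the Wald interval for log(OR):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:

        a = exposed with event,    b = exposed without event
        c = unexposed with event,  d = unexposed without event
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only.
or_, lo, hi = odds_ratio_ci(a=20, b=80, c=10, d=90)
```

An OR with a 95% CI excluding 1 (as for the Stage 3a stroke and Stage 5 mortality results above) is what marks the association as statistically significant; in the multivariate logistic regression the same quantity is exp of the fitted coefficient, adjusted for the selected covariates.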

    Dynamic Posterior Instability Test: A New Test for Posterior Glenohumeral Instability

    BACKGROUND: Recurrent posterior shoulder instability has become an increasingly recognized cause of shoulder disability, especially among athletes. The presentation can be vague, and therefore its clinical diagnosis is often overlooked. Few diagnostic tests exist, and those that do are difficult to perform in an anxious, apprehensive patient. Many also lack high specificity and do not effectively distinguish posterior labral tears from other shoulder pathologies. As a result, the authors worked to develop a new test, the dynamic posterior instability test (DPIT). The purpose of this study was to describe the DPIT and a modified DPIT test and to evaluate the accuracy of these tests in detecting posterior labral pathology. It was hypothesized that the DPIT would improve accuracy in the evaluation of posterior labral tears. METHODS: Over a 9-month period, the DPIT and modified DPIT tests were performed on all patients evaluated for posterior instability of the shoulder. The records of all patients who had undergone a posterior labral repair (type VIII SLAP and posterior labral tears) were reviewed. The results of the DPIT and modified DPIT tests were compared to intra-operative findings. Anterior glenohumeral instability patients were also evaluated with these tests to serve as a control. RESULTS: Fifty-one patients had a positive and 3 patients had a negative DPIT test. Of the anterior instability patients, there was 1 positive and 19 negative test results. The sensitivity of the DPIT test was 94.4%, the specificity 95%, the positive predictive value 0.98, and the negative predictive value 0.86. The results of the modified DPIT were the same as those of the DPIT test. CONCLUSIONS: The DPIT and modified DPIT tests provide a valuable new tool that, when combined with the history and other physical examination findings, improves the accuracy of diagnosis of posterior shoulder instability.
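
The reported accuracy figures follow directly from the stated counts (51 true positives and 3 false negatives among repair patients; 1 false positive and 19 true negatives among anterior-instability controls). A quick check:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Standard diagnostic accuracy measures from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # positives among the diseased
        "specificity": tn / (tn + fp),   # negatives among the disease-free
        "ppv": tp / (tp + fp),           # diseased among test-positives
        "npv": tn / (tn + fn),           # disease-free among test-negatives
    }

# Counts as reported in the study's results.
m = diagnostic_metrics(tp=51, fn=3, fp=1, tn=19)
```

Evaluating with these counts reproduces the abstract's figures: sensitivity 51/54 ≈ 94.4%, specificity 19/20 = 95%, PPV 51/52 ≈ 0.98, NPV 19/22 ≈ 0.86.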