
    Prioritized LT Codes

    The original Luby Transform (LT) coding scheme is extended to account for data transmissions where some information symbols in a message block are more important than others. Prioritized LT codes provide unequal error protection (UEP) of data on an erasure channel by modifying the original LT encoder. The prioritized algorithm improves high-priority data protection without penalizing low-priority data recovery. Moreover, low-latency decoding is also obtained for high-priority data due to fast encoding. Prioritized LT codes require only a slight change in the original encoding algorithm, and no changes at all at the decoder. Hence, with a small complexity increase in the LT encoder, improved UEP and low decoding latency for high-priority data can be achieved. LT encoding partitions a data stream into fixed-size message blocks, each with a constant number of information symbols. To generate a code symbol from the information symbols in a message, the Robust Soliton probability distribution is first applied to determine the number of information symbols used to compute the code symbol. Then, the specific information symbols are chosen uniformly at random from the message block. Finally, the selected information symbols are XORed to form the code symbol. The Prioritized LT code construction adds the restriction that code symbols formed by a relatively small number of XORed information symbols select some of these information symbols from the pool of high-priority data. Once high-priority data are fully covered, encoding continues with the conventional LT approach, where code symbols are generated by selecting information symbols from the entire message block across all priorities. Therefore, if code symbols derived from high-priority data experience an unusually high number of erasures, Prioritized LT codes can still reliably recover both high- and low-priority data. This hybrid approach decides not only "how to encode" but also "what to encode" to achieve UEP. Another advantage of the priority encoding process is that the majority of high-priority data can be decoded sooner, since only a small number of code symbols are required to reconstruct high-priority data. This approach increases the likelihood that high-priority data is decoded before low-priority data. The Prioritized LT code scheme achieves an improvement in high-priority data decoding performance as well as overall information recovery without penalizing the decoding of low-priority data, assuming high-priority data is no more than half of a message block. The cost is the additional complexity required in the encoder. If extra computational resources are available at the transmitter, image, voice, and video transmission quality in terrestrial and space communications can benefit from accurate use of redundancy in protecting data with varying priorities.
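
    The encoding loop described above can be illustrated with a short sketch. The following Python is a minimal, hypothetical rendering of a prioritized LT encoder, assuming a standard Robust Soliton construction; the degree threshold, block size, and priority-pool size are illustrative choices and not values taken from the paper. Because the decoder is unchanged, a standard LT peeling decoder could consume the resulting (index set, symbol) pairs.

import math
import random

def robust_soliton(k, c=0.1, delta=0.5):
    # Standard Robust Soliton degree distribution over degrees 1..k.
    R = c * math.log(k / delta) * math.sqrt(k)
    rho = [1.0 / k if d == 1 else 1.0 / (d * (d - 1)) for d in range(1, k + 1)]
    tau = [0.0] * k
    spike = int(round(k / R))
    for d in range(1, k + 1):
        if d < spike:
            tau[d - 1] = R / (d * k)
        elif d == spike:
            tau[d - 1] = R * math.log(R / delta) / k
    z = sum(rho) + sum(tau)
    return [(r + t) / z for r, t in zip(rho, tau)]

def prioritized_lt_encode(block, n_high, n_code, low_degree=3):
    # Hypothetical sketch: code symbols of small degree draw their inputs from
    # the high-priority prefix block[:n_high] until that pool has been covered;
    # afterwards encoding falls back to conventional LT over the whole block.
    k = len(block)
    weights = robust_soliton(k)
    covered = set()
    code = []
    for _ in range(n_code):
        d = random.choices(range(1, k + 1), weights=weights)[0]
        if d <= low_degree and len(covered) < n_high:
            pool = range(n_high)      # restrict selection to high-priority symbols
        else:
            pool = range(k)           # conventional LT: select from the whole block
        idx = random.sample(pool, min(d, len(pool)))
        covered.update(i for i in idx if i < n_high)
        symbol = 0
        for i in idx:
            symbol ^= block[i]        # XOR the selected information symbols
        code.append((idx, symbol))
    return code

# Example: 100-symbol block, first 30 symbols high priority, 120 code symbols.
encoded = prioritized_lt_encode([random.randrange(256) for _ in range(100)], 30, 120)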

    Constraint Embedding Technique for Multibody System Dynamics

    Multibody dynamics play a critical role in simulation testbeds for space missions. There has been considerable interest in the development of efficient computational algorithms for solving the dynamics of multibody systems. Mass matrix factorization and inversion techniques and the O(N) class of forward dynamics algorithms developed using a spatial operator algebra stand out as important breakthroughs on this front. Techniques such as these provide efficient algorithms and methods for implementing multibody dynamics models. However, these methods are limited to tree-topology multibody systems. Closed-chain topology systems require different techniques that are not as efficient or as broad as those for tree-topology systems. The closed-chain forward dynamics approach consists of treating the closed-chain topology as a tree-topology system subject to additional closure constraints. The resulting forward dynamics solution consists of: (a) ignoring the closure constraints and using the O(N) algorithm to solve for the free unconstrained accelerations of the system; (b) using the tree-topology solution to compute a correction force that enforces the closure constraints; and (c) correcting the unconstrained accelerations with correction accelerations resulting from the correction forces. This constraint-embedding technique shows how to use direct embedding to eliminate local closure loops in the system and effectively convert the system back into a tree-topology system. At this point, standard tree-topology techniques can be brought to bear on the problem. The approach uses spatial operator algebra to formulate the equations of motion. The operators are block-partitioned around the local body subgroups to convert them into aggregate bodies. Mass matrix operator factorization and inversion techniques are then applied to the reformulated tree-topology system. Thus, in essence, the new technique allows conversion of a system with closure constraints into an equivalent tree-topology system, and thereby allows one to take advantage of the host of techniques available to the latter class of systems. This technology is highly suitable for the class of multibody systems where the closure constraints are local, i.e., where they are confined to small groupings of bodies within the system. Important examples of such local closure constraints are those associated with four-bar linkages, geared motors, differential suspensions, etc. One can eliminate these closure constraints and convert the system into a tree-topology system by embedding the constraints directly into the system dynamics and effectively replacing the body groupings with virtual aggregate bodies. Once eliminated, one can apply the well-known results and algorithms for tree-topology systems to solve the dynamics of such closed-chain systems.
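
    The three-step correction procedure in (a)-(c) can be written compactly. Below is a minimal Python/NumPy sketch in which a dense mass matrix stands in for the O(N) spatial-operator solve; the matrices M and G and the constraint right-hand side are assumed inputs, and this illustrates the generic correction-force formulation rather than the paper's operator-level constraint-embedding algorithm.

import numpy as np

def closed_chain_accelerations(M, tau, G, gdot_qd):
    # (a) Ignore the closure constraints and solve the tree-topology (free) system.
    #     A dense solve stands in here for the O(N) articulated-body recursion.
    qdd_free = np.linalg.solve(M, tau)
    # (b) Compute the correction forces (Lagrange multipliers) that enforce the
    #     loop-closure constraints, written here as G @ qdd = -gdot_qd.
    Minv_GT = np.linalg.solve(M, G.T)
    lam = np.linalg.solve(G @ Minv_GT, -(G @ qdd_free + gdot_qd))
    # (c) Correct the free accelerations with the constraint reaction G^T @ lam.
    return qdd_free + Minv_GT @ lam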

    Bending the Rules of Evidence

    The evidence rules have well-established, standard textual meanings—meanings that evidence professors teach their law students every year. Yet, despite the rules’ clarity, courts misapply them across a wide array of cases: Judges allow past acts to bypass the propensity prohibition, squeeze hearsay into facially inapplicable exceptions, and poke holes in supposedly ironclad privileges. And that’s just the beginning. The evidence literature sees these misapplications as mistakes by inept trial judges. This Article takes a very different view. These “mistakes” are often not mistakes at all, but rather instances in which courts are intentionally bending the rules of evidence. Codified evidentiary rules are typically rigid, leaving little room for judicial discretion. When unforgiving rules require exclusion of evidence that seems essential to a case, courts face a Hobson’s choice: Stay faithful to the rules, or instead preserve the integrity of the factfinding process. Frequently, courts have found a third way, claiming nominal fidelity to a rule while contorting it to ensure the evidence’s admissibility. This Article identifies and explores this bending of the rules of evidence. After tracing rule bending across many evidence doctrines, the Article explores the normative roots of the problem. Codification has ossified evidence law, effectively driving judges underground in the search for solutions to their evidentiary dilemmas. Rather than trying to suppress rule bending, we advocate legitimizing it. Specifically, the Article proposes a residual exception that would enable trial courts to admit essential evidence in carefully defined circumstances. Such an exception would bring rule bending out of the shadows and into the light, with benefits to transparency, legitimacy, and accountability. And perhaps most importantly, it would reestablish trial courts as partners in the development of evidence law.

    Universality properties of the stationary states in the one-dimensional coagulation-diffusion model with external particle input

    Using analytical and numerical methods, we investigate the reaction A+A->A on a one-dimensional lattice open at one end and with an input of particles at the other end. We show that if the diffusion rates to the left and to the right are equal, then for large x the particle concentration c(x) behaves like A_s/x, where x measures the distance from the input end. If the diffusion rate in the direction pointing away from the source is larger than the one in the opposite direction, the particle concentration behaves like A_a/sqrt(x). The constants A_s and A_a are independent of the input and the two coagulation rates. The universality of A_a comes as a surprise, since in the asymmetric case the system has a massive spectrum. Comment: 27 pages, LaTeX, including three postscript figures; to appear in J. Stat. Phys.
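
    A small Monte Carlo sketch makes the two predicted decays easy to probe numerically. The Python below is a toy simulation, not the exact model of the paper: the rates, boundary handling, and time discretisation are illustrative assumptions, but the measured stationary profile c(x) can be compared against the ~1/x decay for symmetric hopping and the ~1/sqrt(x) decay when hopping is biased away from the source.

import random
import numpy as np

def simulate_coagulation(L=200, p_right=0.5, steps=500_000, seed=1):
    # Toy Monte Carlo of A + A -> A on a 1D lattice with particle input at
    # site 0 and an open boundary at the far end. p_right is the hop
    # probability away from the source; p_right > 0.5 is the biased case.
    random.seed(seed)
    occupied = [False] * L
    counts = np.zeros(L)
    samples = 0
    for t in range(steps):
        occupied[0] = True                      # constant input at the source end
        particles = [i for i, o in enumerate(occupied) if o]
        i = random.choice(particles)            # pick one particle and let it hop
        j = i + 1 if random.random() < p_right else i - 1
        occupied[i] = False
        if 0 <= j < L:
            occupied[j] = True                  # landing on an occupied site coagulates
        if t > steps // 2:                      # accumulate the stationary profile
            counts += occupied
            samples += 1
    return counts / samples                     # estimate of c(x)

# Example: compare the symmetric and biased stationary profiles.
c_sym = simulate_coagulation(p_right=0.5)
c_asym = simulate_coagulation(p_right=0.7)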

    Human-robot collaborative task planning using anticipatory brain responses

    Human-robot interaction (HRI) describes scenarios in which both human and robot work as partners, sharing the same environment or complementing each other on a joint task. HRI is characterized by the need for high adaptability and flexibility of robotic systems toward their human interaction partners. One of the major challenges in HRI is task planning with dynamic subtask assignment, which is particularly challenging when subtask choices of the human are not readily accessible by the robot. In the present work, we explore the feasibility of using electroencephalogram (EEG) based neuro-cognitive measures for online robot learning of dynamic subtask assignment. To this end, we demonstrate in an experimental human subject study, featuring a joint HRI task with a UR10 robotic manipulator, the presence of EEG measures indicative of a human partner anticipating a takeover situation from human to robot or vice-versa. The present work further proposes a reinforcement learning based algorithm employing these measures as a neuronal feedback signal from the human to the robot for dynamic learning of subtask-assignment. The efficacy of this algorithm is validated in a simulation-based study. The simulation results reveal that even with relatively low decoding accuracies, successful robot learning of subtask-assignment is feasible, with around 80% choice accuracy among four subtasks within 17 minutes of collaboration. The simulation results further reveal that scalability to more subtasks is feasible and mainly accompanied with longer robot learning times. These findings demonstrate the usability of EEG-based neuro-cognitive measures to mediate the complex and largely unsolved problem of human-robot collaborative task planning
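
    The simulation result quoted above (learning a preferred assignment from imperfectly decoded feedback) can be illustrated with a small bandit-style sketch. The Python below is not the paper's algorithm: the value-update rule, exploration rate, and reward coding are illustrative assumptions, with decode_acc playing the role of the EEG decoding accuracy.

import random

def learn_subtask_assignment(n_subtasks=4, decode_acc=0.7, n_trials=500, seed=0):
    # The robot keeps a value estimate per subtask and updates it from a binary
    # neuronal feedback signal that is correct only with probability decode_acc,
    # mimicking imperfect EEG decoding. The human's preferred takeover subtask
    # is fixed and unknown to the robot.
    random.seed(seed)
    true_pref = random.randrange(n_subtasks)
    values = [0.0] * n_subtasks
    eps, alpha = 0.1, 0.1                       # exploration rate, learning rate
    for _ in range(n_trials):
        if random.random() < eps:
            choice = random.randrange(n_subtasks)                       # explore
        else:
            choice = max(range(n_subtasks), key=values.__getitem__)     # exploit
        correct = (choice == true_pref)
        # Noisy feedback: the decoded response matches the true outcome
        # only with probability decode_acc.
        feedback = correct if random.random() < decode_acc else not correct
        reward = 1.0 if feedback else -1.0
        values[choice] += alpha * (reward - values[choice])
    best = max(range(n_subtasks), key=values.__getitem__)
    return best == true_pref, values

# Example: check whether the preferred subtask is learned despite 70% decoding accuracy.
learned, estimates = learn_subtask_assignment()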

    Functional genomics screen identifies YAP1 as a key determinant to enhance treatment sensitivity in lung cancer cells

    Survival for lung cancer patients remains dismal and is largely attributed to treatment resistance. To identify novel target genes whose modulation could modify platinum resistance, we performed a high-throughput RNAi screen and identified Yes-associated protein (YAP1), a transcription coactivator and a known oncogene, as a potential actionable candidate. YAP1 ablation significantly improved sensitivity not only to cisplatin but also to ionizing radiation, both of which are DNA-damaging interventions, in non-small cell lung cancer (NSCLC) cells. Overall, YAP1 was expressed in 75% of NSCLC specimens, whereas nuclear YAP1, the active form, was present in 45% of 124 resected NSCLC. Interestingly, EGFR-mutated or KRAS-mutated NSCLC were associated with higher nuclear YAP1 staining in comparison to EGFR/KRAS wild-type. Relevantly, YAP1 downregulation improved sensitivity to erlotinib, an EGFR inhibitor. Verteporfin, a pharmacological inhibitor of YAP1 signaling, also synergized with cisplatin, radiation, and erlotinib in NSCLC cells by potentiating cisplatin- and radiation-related double-strand breaks and decreasing expression of YAP1 and EGFR. Taken together, our study is the first to indicate the potential role of YAP1 as a common modulator of resistance mechanisms and a potential novel, actionable target that can improve responses to platinum, radiation, and EGFR-targeted therapy in lung cancer.

    Complete intracranial response to talimogene laherparepvec (T-Vec), pembrolizumab and whole brain radiotherapy in a patient with melanoma brain metastases refractory to dual checkpoint-inhibition

    Background: Immunotherapy, in particular checkpoint blockade, has changed the clinical landscape of metastatic melanoma. Nonetheless, the majority of patients will either be primary refractory or progress over follow-up. Management of patients progressing on first-line immunotherapy remains challenging. Expanded treatment options with combination immunotherapy have demonstrated efficacy in patients previously unresponsive to single-agent or alternative combination therapy. Case presentation: We describe the case of a patient with diffusely metastatic melanoma, including brain metastases, who, despite being treated with stereotactic radiosurgery and dual CTLA-4/PD-1 blockade (ipilimumab/nivolumab), developed systemic disease progression and innumerable brain metastases. This patient achieved a complete CNS response and partial systemic response with standard whole brain radiation therapy (WBRT) combined with talimogene laherparepvec (T-Vec) and pembrolizumab. Conclusion: Patients who do not respond to one immunotherapy combination may respond during treatment with an alternate combination, even in the presence of multiple brain metastases. Biomarkers are needed to assist clinicians in evidence-based clinical decision-making after progression on first-line immunotherapy, to determine whether response can be achieved with second-line immunotherapy.

    Dual mechanism of brain injury and novel treatment strategy in maple syrup urine disease

    Maple syrup urine disease (MSUD) is an inherited disorder of branched-chain amino acid metabolism presenting with life-threatening cerebral oedema and dysmyelination in affected individuals. Treatment requires life-long dietary restriction and monitoring of branched-chain amino acids to avoid brain injury. Despite careful management, children commonly suffer metabolic decompensation in the context of catabolic stress associated with non-specific illness. The mechanisms underlying this decompensation and brain injury are poorly understood. Using recently developed mouse models of classic and intermediate maple syrup urine disease, we assessed biochemical, behavioural and neuropathological changes that occurred during encephalopathy in these mice. Here, we show that rapid brain leucine accumulation displaces other essential amino acids, resulting in neurotransmitter depletion and disruption of normal brain growth and development. A novel approach of administering norleucine to heterozygous mothers of classic maple syrup urine disease pups reduced branched-chain amino acid accumulation in milk as well as in the blood and brain of these pups to enhance survival. Similarly, norleucine substantially delayed encephalopathy in intermediate maple syrup urine disease mice placed on a high-protein diet that mimics the catabolic stress shown to cause encephalopathy in human maple syrup urine disease. Current findings suggest two converging mechanisms of brain injury in maple syrup urine disease: (i) neurotransmitter deficiencies and growth restriction associated with branched-chain amino acid accumulation, and (ii) energy deprivation through Krebs cycle disruption associated with branched-chain ketoacid accumulation. Both classic and intermediate models appear to be useful for studying the mechanism of brain injury and potential treatment strategies for maple syrup urine disease. Norleucine should be further tested as a potential treatment to prevent encephalopathy in children with maple syrup urine disease during catabolic stress.