ViFi: accurate detection of viral integration and mRNA fusion reveals indiscriminate and unregulated transcription in proximal genomic regions in cervical cancer.
The integration of viral sequences into the host genome is an important driver of tumorigenesis in many virally mediated cancers, notably cervical cancer and hepatocellular carcinoma. We present ViFi, a computational method that combines phylogenetic methods with reference-based read mapping to detect viral integrations. In contrast with read-based reference mapping approaches, ViFi is faster and shows high precision and sensitivity on both simulated and biological data, even when the integrated virus is a novel strain or highly mutated. We applied ViFi to matched genomic and mRNA data from 68 cervical cancer samples from TCGA and found high concordance between the two. Surprisingly, viral integration resulted in a dramatic transcriptional upregulation of all proximal elements, including LINEs and LTRs that are not normally transcribed. This upregulation is highly correlated with the presence of a viral gene fused with a downstream human element. Moreover, genomic rearrangements suggest the formation of apparent circular extrachromosomal (ecDNA) human-viral structures. Our results suggest the presence of apparent small circular fusion viral/human ecDNA, which correlates with indiscriminate and unregulated expression of proximal genomic elements, potentially contributing to the pathogenesis of HPV-associated cervical cancers. ViFi is available at https://github.com/namphuon/ViFi
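The abstract's phylogenetic machinery is beyond a short example, but the basic read-pair evidence such tools build on can be sketched. A minimal, hypothetical illustration (function names, the read-pair format, and the thresholds are invented for this sketch, not ViFi's): cluster human/virus chimeric read pairs into genomic bins and keep well-supported bins as candidate integration sites.

```python
# Hypothetical sketch: flag candidate viral-integration sites from read
# pairs. Each pair records which reference its two mates mapped to
# ("human" or "virus") and the human-side coordinate.
from collections import Counter

def candidate_integration_sites(read_pairs, bin_size=1000, min_support=3):
    """Cluster chimeric human/virus read pairs into genomic bins and keep
    bins with enough supporting pairs to call a candidate site."""
    bins = Counter()
    for mate1_ref, mate2_ref, human_pos in read_pairs:
        if {mate1_ref, mate2_ref} == {"human", "virus"}:   # chimeric pair
            bins[human_pos // bin_size] += 1
    return sorted(b * bin_size for b, n in bins.items() if n >= min_support)

pairs = (
    [("human", "virus", 12_340)] * 4 +   # well-supported site near 12 kb
    [("human", "virus", 98_765)] * 2 +   # too little support, dropped
    [("human", "human", 5_000)] * 10     # ordinary pairs, ignored
)
print(candidate_integration_sites(pairs))   # [12000]
```

Real callers additionally check mapping quality, split reads, and strand orientation; this sketch keeps only the clustering idea.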
Ocean Surface Maps From Blending Disparate Data Through Normal Mode Analysis
Rapid environmental assessment (REA) is conducted using disparate data sources in the northwestern Gulf of Mexico. An overview of significant physical features in the Gulf highlights the complexities of the large- and meso-scale circulations. Spectral analysis of high-resolution current meter and drifter data reveals the significant forcing features detectable by readily available observing techniques. These observations are combined with boundary data extracted from the U.S. Navy's Modular Ocean Data Assimilation System (MODAS) through Normal Mode Analysis (NMA). The NMA blending process is described, and surface maps of velocity and convergence are produced. Using statistical and qualitative techniques, the NMA-generated "nowcasts" are analyzed to determine the significant modes applicable to varying oceanographic situations. Fundamental guidance for choosing the number and type of modes in an REA scenario is noted. The NMA method proves to be a useful tool in constructing analytic surface maps when the component modes are wisely chosen.
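As a toy illustration of the blending idea only (sinusoidal basis functions stand in for the true basin modes; nothing here is MODAS- or NMA-specific), mode amplitudes can be fitted to scattered surface observations by least squares and the fitted field then evaluated anywhere:

```python
# Illustrative sketch: fit a small set of basis "modes" to scattered
# surface observations by least squares. The sinusoidal modes are
# stand-ins for the true normal modes of the basin.
import numpy as np

def fit_modes(obs_xy, obs_u, n_modes=3):
    """Least-squares fit of mode amplitudes to scattered observations
    on the unit square."""
    x, y = obs_xy[:, 0], obs_xy[:, 1]
    A = np.column_stack(
        [np.sin((k + 1) * np.pi * x) * np.sin((k + 1) * np.pi * y)
         for k in range(n_modes)])
    coeffs, *_ = np.linalg.lstsq(A, obs_u, rcond=None)
    return coeffs

# Synthetic "observations" drawn from a field made of the first mode only.
rng = np.random.default_rng(0)
pts = rng.uniform(0.05, 0.95, size=(50, 2))
u = 2.0 * np.sin(np.pi * pts[:, 0]) * np.sin(np.pi * pts[:, 1])
print(fit_modes(pts, u))   # amplitude near 2 on mode 1, near 0 on the rest
```

The choice of how many and which modes to retain is exactly the mode-selection question the thesis addresses; with too many modes the fit chases observation noise.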
Chaotic exploration and learning of locomotion behaviours
We present a general and fully dynamic neural system, which exploits intrinsic chaotic dynamics, for the real-time goal-directed exploration and learning of the possible locomotion patterns of an articulated robot of an arbitrary morphology in an unknown environment. The controller is modeled as a network of neural oscillators that are initially coupled only through physical embodiment, and goal-directed exploration of coordinated motor patterns is achieved by chaotic search using adaptive bifurcation. The phase space of the indirectly coupled neural-body-environment system contains multiple transient or permanent self-organized dynamics, each of which is a candidate for a locomotion behavior. The adaptive bifurcation enables the system orbit to wander through various phase-coordinated states, using its intrinsic chaotic dynamics as a driving force, and stabilizes onto one of the states matching the given goal criteria. In order to improve the sustainability of useful transient patterns, sensory homeostasis has been introduced, which results in an increased diversity of motor outputs, thus achieving multiscale exploration. A rhythmic pattern discovered by this process is memorized and sustained by changing the wiring between initially disconnected oscillators using an adaptive synchronization method. Our results show that the novel neurorobotic system is able to create and learn multiple locomotion behaviors for a wide range of body configurations and physical environments and can readapt in real time after sustaining damage.
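The "chaotic search followed by stabilization" idea can be caricatured with a much simpler system than the paper's embodied oscillator network. A purely illustrative analogue (a ring of coupled logistic maps with an annealed bifurcation parameter; none of this is the authors' model): start in the chaotic regime so the states wander, then lower the parameter toward a periodic regime so the coupled system settles.

```python
# Illustrative analogue of chaotic search + stabilization: a ring of
# coupled logistic maps whose shared bifurcation parameter r is annealed
# from the chaotic regime (r ~ 3.9) toward a periodic one (r ~ 3.2).
import numpy as np

def explore_then_stabilize(n_osc=4, steps=600, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.2, 0.8, n_osc)
    eps = 0.3                                  # coupling (the "embodiment")
    for t in range(steps):
        r = 3.9 - (3.9 - 3.2) * min(1.0, t / 400)   # anneal chaos -> order
        fx = r * x * (1 - x)
        x = (1 - eps) * fx + eps * np.roll(fx, 1)   # nearest-neighbor coupling
    return x

print(explore_then_stabilize())   # per-oscillator states after stabilization
```

In the paper the annealing is not a fixed schedule: the bifurcation parameter is adapted online by a goal signal, so the system stays chaotic until a pattern meeting the goal criteria is found.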
Evolutionary Curriculum Training for DRL-Based Navigation Systems
In recent years, Deep Reinforcement Learning (DRL) has emerged as a promising method for robot collision avoidance. However, such DRL models often struggle to adapt to structured environments containing various pedestrians. Previous research has attempted several remedies, including training an end-to-end solution that integrates a waypoint planner with DRL and developing a multimodal solution to mitigate the drawbacks of the DRL model. These approaches, however, suffer from slow training, scalability challenges, and poor coordination among the different models. To address these issues, this paper introduces evolutionary curriculum training, whose primary goal is to evaluate the collision avoidance model's competency in various scenarios and create curricula to strengthen its insufficient skills. The paper introduces an innovative evaluation technique to assess the DRL model's performance in navigating structured maps and avoiding dynamic obstacles, together with an evolutionary training environment that generates curricula targeting the inadequate skills exposed by that evaluation. We benchmark our model across five structured environments to validate the hypothesis that this evolutionary training environment leads to a higher success rate and a lower average number of collisions. Further details and results are available at our project website.
Comment: Robotics: Science and Systems
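The curriculum-building step described above can be sketched in a few lines. A hypothetical illustration (the scenario names, deficit weighting, and episode budget are invented for this sketch, not taken from the paper): score the agent per scenario, then allocate the next round of training episodes in proportion to each scenario's skill deficit, so weaker skills receive more practice.

```python
# Hypothetical sketch of curriculum generation from per-scenario scores:
# episodes are allocated proportionally to each scenario's failure rate.
def build_curriculum(success_rates, total_episodes=100):
    """Map scenario -> episode count, weighting by skill deficit."""
    deficits = {s: 1.0 - r for s, r in success_rates.items()}
    norm = sum(deficits.values()) or 1.0
    return {s: round(total_episodes * d / norm)
            for s, d in deficits.items()}

rates = {"corridor": 0.9, "crowd": 0.4, "crossing": 0.7}
print(build_curriculum(rates))   # the weakest skill ("crowd") gets the most episodes
```

The paper's evolutionary environment goes further by generating new scenario instances, rather than only reweighting a fixed set as this sketch does.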
The link between great earthquakes and the subduction of oceanic fracture zones
Giant subduction earthquakes are known to occur in areas not previously identified as prone to high seismic risk. This highlights the need to better identify subduction zone segments potentially dominated by relatively long (up to 1000 yr and more) recurrence times of giant earthquakes. We construct a model for the geometry of subduction coupling zones and combine it with global geophysical data sets to demonstrate that the occurrence of great (magnitude ≥ 8) subduction earthquakes is strongly biased towards regions associated with intersections of oceanic fracture zones and subduction zones. We use a computational recommendation technology, a type of information filtering system technique widely used in searching, sorting, classifying, and filtering very large, statistically skewed data sets on the Internet, to demonstrate a robust association and rule out a random effect. Fracture zone-subduction zone intersection regions, representing only 25% of the global subduction coupling zone, are linked with 13 of the 15 largest (magnitude Mw ≥ 8.6) and half of the 50 largest (magnitude Mw ≥ 8.4) earthquakes. In contrast, subducting volcanic ridges and chains are only biased towards smaller earthquakes (magnitude < 8). The associations captured by our statistical analysis can be conceptually related to physical differences between subducting fracture zones and volcanic chains/ridges. Fracture zones are characterised by laterally continuous, uplifted ridges that represent normal ocean crust with a high degree of structural integrity, causing strong, persistent coupling in the subduction interface. Smaller volcanic ridges and chains have a relatively fragile heterogeneous internal structure and are separated from the underlying ocean crust by a detachment interface, resulting in weak coupling and relatively small earthquakes, providing a conceptual basis for the observed dichotomy.
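A back-of-envelope companion to the abstract's headline numbers (this is a plain binomial tail calculation, not the recommendation-technology analysis the authors actually use): if intersection regions cover 25% of the coupling zone, the chance that 13 or more of the 15 largest earthquakes fall there at random is tiny.

```python
# Binomial tail check of the 13-of-15 observation under a random model
# where each great earthquake independently lands in an intersection
# region with probability 0.25.
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"{binom_tail(15, 13, 0.25):.2e}")   # on the order of 1e-6
```

The independence assumption is the weak point of such a quick check, which is presumably why the authors use a more careful filtering-based analysis to rule out a random effect.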
Routing on the Channel Dependency Graph: A New Approach to Deadlock-Free, Destination-Based, High-Performance Routing for Lossless Interconnection Networks
In the pursuit of ever-increasing compute power, and with Moore's law slowly coming to an end, high-performance computing started to scale out to larger systems. Alongside the increasing system size, the interconnection network is growing to accommodate and connect tens of thousands of compute nodes. These networks have a large influence on total cost, application performance, energy consumption, and overall system efficiency of the supercomputer. Unfortunately, state-of-the-art routing algorithms, which define the packet paths through the network, do not utilize this important resource efficiently. Topology-aware routing algorithms become increasingly inapplicable due to irregular topologies, which are either irregular by design or, more often, the result of hardware failures. Exchanging faulty network components potentially requires whole-system downtime, further increasing the cost of the failure. This management approach becomes more and more impractical due to the scale of today's networks and the accompanying steady decrease of the mean time between failures. Alternative methods of operating and maintaining these high-performance interconnects, both in terms of hardware and software management, are necessary to mitigate negative effects experienced by scientific applications executed on the supercomputer. However, existing topology-agnostic routing algorithms either suffer from poor load balancing or are not bounded in the number of virtual channels needed to resolve deadlocks in the routing tables.
The fail-in-place strategy, a well-established method in storage systems of repairing only critical component failures, is a feasible solution for current and future HPC interconnects as well as other large-scale installations such as data center networks. However, an appropriate combination of topology and routing algorithm is required to minimize the throughput degradation for the entire system. This thesis contributes a network simulation toolchain to facilitate the process of finding a suitable combination, either during system design or while the system is in operation. On top of this foundation, a key contribution is a novel scheduling-aware routing, which reduces fault-induced throughput degradation while improving overall network utilization. The scheduling-aware routing performs frequent property-preserving routing updates to optimize the path balancing for simultaneously running batch jobs. The increased deployment of lossless interconnection networks, in conjunction with fail-in-place modes of operation and topology-agnostic, scheduling-aware routing algorithms, necessitates new solutions to the routing-deadlock problem. Therefore, this thesis further advances the state of the art by introducing a novel concept of routing on the channel dependency graph, which allows the design of a universally applicable destination-based routing capable of optimizing the path balancing without exceeding a given number of virtual channels, which are a common hardware limitation. This disruptive innovation enables implicit deadlock avoidance during path calculation, instead of solving both problems separately as all previous solutions do.
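The central object of the thesis can be illustrated compactly. A minimal, hypothetical sketch (channel names and the route format are invented): build the channel dependency graph induced by a set of routes, where a route occupying channel a before channel b creates a dependency a -> b, and test it for cycles, since an acyclic channel dependency graph guarantees deadlock freedom.

```python
# Illustrative sketch: channel dependency graph (CDG) construction from
# routes, plus a depth-first cycle check. An acyclic CDG means the
# routing is deadlock-free (Dally/Seitz condition).
def channel_dependency_graph(routes):
    """Each route is a sequence of directed channels; consecutive
    channels create a holding-a-while-requesting-b dependency."""
    deps = {}
    for route in routes:
        for a, b in zip(route, route[1:]):
            deps.setdefault(a, set()).add(b)
    return deps

def has_cycle(deps):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def visit(u):
        color[u] = GRAY
        for v in deps.get(u, ()):
            if color.get(v, WHITE) == GRAY:     # back edge -> cycle
                return True
            if color.get(v, WHITE) == WHITE and visit(v):
                return True
        color[u] = BLACK
        return False
    return any(color.get(u, WHITE) == WHITE and visit(u) for u in deps)

# Three routes whose dependencies chain into a cycle -> potential deadlock.
cyclic = [["c1", "c2"], ["c2", "c3"], ["c3", "c1"]]
acyclic = [["c1", "c2"], ["c2", "c3"]]
print(has_cycle(channel_dependency_graph(cyclic)))   # True
print(has_cycle(channel_dependency_graph(acyclic)))  # False
```

The thesis's contribution is to search for paths directly on this graph, so that cycles are never introduced in the first place and the virtual-channel budget is respected, rather than computing routes first and breaking cycles afterwards.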