Easily Solving Dynamic Programming Problems in Haskell by Memoization of Hylomorphisms
Dynamic Programming is a well-known algorithmic technique that solves problems by a combination of dividing a problem into subproblems and using memoization to avoid an exponential growth of the costs. We show how to implement Dynamic Programming in Haskell using a variation of hylomorphisms that includes memoization. Our implementation uses polymorphism, so the same function can return the best score or the solution to the problem depending on the type of the returned value.
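The idea can be illustrated with a minimal sketch (not the paper's implementation): a standard hylomorphism next to a memoized variant that threads a `Data.Map` cache through the unfold/fold, so each subproblem is computed only once. The coin-change example and all names below are illustrative assumptions.

```haskell
{-# LANGUAGE DeriveTraversable #-}
-- Sketch only: a hylomorphism whose recursion is intercepted by a
-- Data.Map cache, so each subproblem is unfolded and folded once.
import qualified Data.Map as Map
import Data.Traversable (mapAccumL)

-- plain hylomorphism: unfold with the coalgebra, fold with the algebra
hylo :: Functor f => (f b -> b) -> (a -> f a) -> a -> b
hylo alg coalg = alg . fmap (hylo alg coalg) . coalg

-- memoized variant: same shape, but results are cached per seed value
hyloMemo :: (Traversable f, Ord a) => (f b -> b) -> (a -> f a) -> a -> b
hyloMemo alg coalg = snd . go Map.empty
  where
    go cache a = case Map.lookup a cache of
      Just b  -> (cache, b)
      Nothing ->
        let (cache', fb) = mapAccumL go cache (coalg a)
            b            = alg fb
        in (Map.insert a b cache', b)

-- example DP: fewest coins (denominations 1, 4, 5) summing to n
data ChangeF x = Solved | Choices [x]
  deriving (Functor, Foldable, Traversable)

coinCoalg :: Int -> ChangeF Int
coinCoalg 0 = Solved
coinCoalg n = Choices [n - c | c <- [1, 4, 5], c <= n]

coinAlg :: ChangeF Int -> Int
coinAlg Solved       = 0
coinAlg (Choices xs) = 1 + minimum xs

minCoins :: Int -> Int
minCoins = hyloMemo coinAlg coinCoalg
```

Note that `hylo` and `hyloMemo` have the same algebra/coalgebra interface; only the caching differs, which is what lets the same DP specification be run with or without memoization.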
Temporal ordering of substitutions in RNA evolution: uncovering the structural evolution of the human accelerated region 1
The Human Accelerated Region 1 (HAR1) is the most rapidly evolving region in the human genome. It is part of two overlapping long non-coding RNAs, has a length of only 118 nucleotides and features 18 human-specific changes compared to an ancestral sequence that is extremely well conserved across non-human primates. The human HAR1 forms a stable secondary structure that is strikingly different from the one in the chimpanzee as well as other closely related species, again emphasizing its human-specific evolutionary history. This suggests that positive selection has acted to stabilize human-specific features in the ensemble of HAR1 secondary structures. To investigate the evolutionary history of the human HAR1 structure, we developed a computational model that evaluates the relative likelihood of evolutionary trajectories as a probabilistic version of a Hamiltonian path problem. The model predicts that the most likely last step in turning the ancestral primate HAR1 into the human HAR1 was exactly the substitution that distinguishes the modern human HAR1 sequence from that of the Denisovan, an archaic human, providing independent support for our model. The MutationOrder software is available for download and can be applied to other instances of RNA structure evolution.
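The Hamiltonian-path view can be sketched as follows. This toy sketch is not the MutationOrder implementation: every ordering of the substitutions is one Hamiltonian path through the intermediate sequences, each trajectory is weighted by a user-supplied per-step score (a hypothetical stand-in for the structure-based weights the real model derives from RNA folding ensembles), and the weights are normalized into relative likelihoods.

```haskell
-- Toy sketch (not the MutationOrder implementation): score every
-- ordering of a set of substitutions and normalize the weights.
import Data.List (permutations)

-- weight of one trajectory: product of per-step scores along the
-- chain of intermediate sequences produced by applying the
-- substitutions in the given order
trajectoryWeight :: (s -> s -> Double)  -- hypothetical step score
                 -> (s -> sub -> s)     -- apply one substitution
                 -> s -> [sub] -> Double
trajectoryWeight stepScore apply s0 subs =
  product (zipWith stepScore states (tail states))
  where states = scanl apply s0 subs

-- relative likelihood of every ordering (Hamiltonian path) of the
-- substitutions, obtained by normalizing the trajectory weights
orderLikelihoods :: (s -> s -> Double) -> (s -> sub -> s)
                 -> s -> [sub] -> [([sub], Double)]
orderLikelihoods stepScore apply s0 subs =
  [ (order, w / total) | (order, w) <- weighted ]
  where
    weighted = [ (order, trajectoryWeight stepScore apply s0 order)
               | order <- permutations subs ]
    total    = sum (map snd weighted)
```

Enumerating all permutations is only feasible for small instances; the abstract's Hamiltonian-path formulation is exactly the kind of problem that the dynamic-programming techniques of the other two papers address.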
Histo- and Dynamorphisms Revisited
Dynamic programming algorithms embody a widely used programming technique that optimizes recursively defined equations that have repeating subproblems. The standard solution uses arrays to share common results between successive steps, and while effective, this fails to exploit the structural properties present in these problems. Histomorphisms and dynamorphisms have been introduced to express such algorithms in terms of structured recursion schemes that leverage this structure. In this paper, we revisit and relate these schemes and show how they can be expressed in terms of recursion schemes from comonads, as well as from recursive coalgebras. Our constructions rely on properties of bialgebras and dicoalgebras, and we are careful to consider optimizations and efficiency concerns. Throughout the paper we illustrate these techniques through several worked-out examples discussed in a tutorial style, and show how a recursive specification can be expressed both as an array-based algorithm as well as one that uses recursion schemes.
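A small self-contained sketch of a dynamorphism (the example and the hand-rolled cofree structure are assumptions, not the paper's code): the unfold builds the call structure, and the algebra may look arbitrarily far back into the annotated history of already-computed results, which is what makes these schemes fit dynamic programming.

```haskell
-- Sketch of a dynamorphism: an anamorphism and a histomorphism fused
-- into one pass, using a hand-rolled cofree structure that records
-- the full history of intermediate results.
data Cofree f a = a :< f (Cofree f a)

extract :: Cofree f a -> a
extract (a :< _) = a

-- dynamorphism: unfold with `coalg`, fold with `alg`, where `alg`
-- sees the history of all previously computed results
dyna :: Functor f => (f (Cofree f a) -> a) -> (c -> f c) -> c -> a
dyna alg coalg = extract . h
  where h = (\fc -> alg fc :< fc) . fmap h . coalg

-- example: Fibonacci, reading both predecessors out of the history
fibCoalg :: Int -> Maybe Int
fibCoalg 0 = Nothing
fibCoalg n = Just (n - 1)

fibAlg :: Maybe (Cofree Maybe Integer) -> Integer
fibAlg Nothing                     = 0      -- fib 0
fibAlg (Just (_ :< Nothing))       = 1      -- fib 1
fibAlg (Just (a :< Just (b :< _))) = a + b  -- fib (n-1) + fib (n-2)

fib :: Int -> Integer
fib = dyna fibAlg fibCoalg
```

Because the history chain here is linear, each subresult is built exactly once, giving the linear-time behavior that an array-based Fibonacci would achieve by explicit indexing.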