
    Video Editing with Single Responsibility Principle

    Non-linear video editors are typically very large, complicated programs able to perform many sophisticated operations on all kinds of video formats. A professional video editor should be able to encode and decode many formats, cut and join video clips, apply filters to those videos (including color correction), analyze those videos in real time (histograms and tracking), do audio editing with digital signal processing, render 3D graphics and show them in real time, render text, animate all effects and texts with their properties (position, opacity, etc.) with keyframes, and finally play all these clips in real time while showing the current output of the video editing. There is also final rendering, but encoding was mentioned earlier, and since the video is already processed in real time, rendering should be trivial. The main challenge, which this thesis attempts to solve, is that these applications are extremely complex and contain multiple components that could be applications in their own right: if one component crashes, the whole program crashes. The exact point of the single responsibility principle (which can be seen here as part of the UNIX philosophy) is to have applications that do one thing and do it well. This means that all the parts of a modern, complex video editor would be divided into small modular parts that function on their own but are able to talk with each other. This also means that the end user does not lose all the progress that has been made, which in turn makes video editing a much easier and smoother experience. This kind of modularity and its possibilities are researched in this thesis.
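    The process-isolation idea the abstract describes can be sketched in a few lines: run a component in its own process and talk to it over a pipe, so a crash in the component cannot take the main application down with it. This is an illustrative sketch, not code from the thesis; the component name and the toy "brightness filter" are invented for the example.

    ```python
    # Sketch: an editor component running as its own process, communicating
    # over a pipe (the UNIX-philosophy style of modularity described above).
    import multiprocessing as mp

    def filter_component(conn):
        """Hypothetical component: brightens pixel values received over a pipe."""
        while True:
            frame = conn.recv()
            if frame is None:          # shutdown signal from the main process
                break
            conn.send([min(255, px + 10) for px in frame])

    if __name__ == "__main__":
        parent, child = mp.Pipe()
        proc = mp.Process(target=filter_component, args=(child,))
        proc.start()
        parent.send([10, 250, 100])    # hand one toy "frame" to the component
        result = parent.recv()
        parent.send(None)              # ask the component to exit cleanly
        proc.join()
        print(result)
    ```

    If `filter_component` crashed, only its process would die; the main process could restart it without losing the user's editing session.
    
    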

    Caching, crashing & concurrency - verification under adverse conditions

    The formal development of large-scale software systems is a complex and time-consuming effort. Generally, its main goal is to prove the functional correctness of the resulting system. This goal becomes significantly harder to reach when the verification must be performed under adverse conditions. When aiming for a realistic system, the implementation must be compatible with the “real world”: it must work with existing system interfaces, cope with uncontrollable events such as power cuts, and offer competitive performance by using mechanisms like caching or concurrency. The Flashix project is an example of such a development, in which a fully verified file system for flash memory has been developed. The project is a long-term team effort and resulted in a sequential, functionally correct and crash-safe implementation after its first project phase. This thesis continues the work by performing modular extensions to the file system with performance-oriented mechanisms that mainly involve caching and concurrency, always considering crash-safety. As a first contribution, this thesis presents a modular verification methodology for destructive heap algorithms. The approach simplifies the verification by separating reasoning about specifics of heap implementations, like pointer aliasing, from the reasoning about conceptual correctness arguments. The second contribution of this thesis is a novel correctness criterion for crash-safe, cached, and concurrent file systems. A natural criterion for crash-safety is defined in terms of system histories, matching the behavior of fine-grained caches using complex synchronization mechanisms that reorder operations. The third contribution comprises methods for verifying functional correctness and crash-safety of caching mechanisms and concurrency in file systems. A reference implementation for crash-safe caches of high-level data structures is given, and a strategy for proving crash-safety is demonstrated and applied. 
A compatible concurrent implementation of the top layer of file systems is presented, using a mechanism for the efficient management of fine-grained file locking, and a concurrent version of garbage collection is realized. Both concurrency extensions are proven to be correct by applying atomicity refinement, a methodology for proving linearizability. Finally, this thesis contributes a new iteration of executable code for the Flashix file system. With the efficiency extensions introduced with this thesis, Flashix covers all performance-oriented concepts of realistic file system implementations and achieves competitiveness with state-of-the-art flash file systems.
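    The history-based crash-safety criterion can be illustrated with a toy write-back cache: a crash loses the volatile buffer, but the surviving persistent state must correspond to some prefix of the operation history. This is a minimal sketch of the general idea, not the Flashix implementation; all names are illustrative.

    ```python
    # Sketch: crash-safety as "the disk survives with a prefix of the history".
    class WriteBackCache:
        def __init__(self):
            self.disk = {}      # persistent state (survives crashes)
            self.buffer = []    # volatile write-back buffer, in operation order

        def write(self, key, value):
            self.buffer.append((key, value))   # buffered, not yet persistent

        def flush(self, n=None):
            # Persist the oldest n buffered writes (all if n is None), in order,
            # so the disk always reflects a prefix of the write history.
            n = len(self.buffer) if n is None else n
            for key, value in self.buffer[:n]:
                self.disk[key] = value
            self.buffer = self.buffer[n:]

        def crash(self):
            self.buffer = []    # volatile state lost; flushed prefix survives

    cache = WriteBackCache()
    cache.write("a", 1); cache.write("b", 2); cache.write("c", 3)
    cache.flush(2)              # persist the writes of "a" and "b"
    cache.crash()               # the write of "c" is lost
    print(cache.disk)           # the surviving state is a prefix of the history
    ```

    A cache that reorders flushes (as fine-grained real caches do) would violate this naive prefix property, which is why the thesis needs a criterion phrased over system histories rather than over raw disk contents.
    
    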

    Structures of Phytophthora RXLR Effector Proteins: a conserved but adaptable fold underpins functional diversity

    Phytopathogens deliver effector proteins inside host plant cells to promote infection. These proteins can also be sensed by the plant immune system, leading to restriction of pathogen growth. Effector genes can display signatures of positive selection and rapid evolution, presumably a consequence of their co-evolutionary arms race with plants. The molecular mechanisms underlying how effectors evolve to gain new virulence functions and/or evade the plant immune system are poorly understood. Here, we report the crystal structures of the effector domains from two oomycete RXLR proteins, Phytophthora capsici AVR3a11 and Phytophthora infestans PexRD2. Despite sharing …

    Classical simulation complexity of extended Clifford circuits

    Clifford gates are a winsome class of quantum operations combining mathematical elegance with physical significance. The Gottesman-Knill theorem asserts that Clifford computations can be classically efficiently simulated, but this is true only in a suitably restricted setting. Here we consider Clifford computations with a variety of additional ingredients: (a) strong vs. weak simulation, (b) inputs being computational basis states vs. general product states, (c) adaptive vs. non-adaptive choices of gates for circuits involving intermediate measurements, (d) single-line outputs vs. multi-line outputs. We consider the classical simulation complexity of all combinations of these ingredients and show that many are not classically efficiently simulatable (subject to common complexity assumptions such as P not equal to NP). Our results reveal a surprising proximity of classical to quantum computing power, viz. a class of classically simulatable quantum circuits which yields universal quantum computation if extended by a purely classical additional ingredient that does not extend the class of quantum processes occurring.
    Comment: 17 pages, 1 figure
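    The efficient simulation the Gottesman-Knill theorem guarantees works by tracking n stabilizer generators (Pauli strings) instead of 2^n amplitudes, updating each generator under conjugation by the applied gate. The following is a phase-free sketch of that idea, not code from the paper; a full simulator additionally tracks a sign bit per generator and handles measurement.

    ```python
    # Phase-free stabilizer update for two Clifford gates, H and CNOT.
    # Each stabilizer generator is a Pauli string such as "ZI" or "XX".

    # H conjugation on one qubit: X <-> Z, Y and I fixed (signs dropped here).
    H_RULE = {"X": "Z", "Z": "X", "Y": "Y", "I": "I"}

    def apply_h(stabs, q):
        return [s[:q] + H_RULE[s[q]] + s[q + 1:] for s in stabs]

    def apply_cnot(stabs, c, t):
        # CNOT conjugation: X_c -> X_c X_t and Z_t -> Z_c Z_t (phase-free).
        out = []
        for s in stabs:
            p = list(s)
            if s[c] in ("X", "Y"):  # X component on control propagates X to target
                p[t] = {"I": "X", "X": "I", "Z": "Y", "Y": "Z"}[p[t]]
            if s[t] in ("Z", "Y"):  # Z component on target propagates Z to control
                p[c] = {"I": "Z", "Z": "I", "X": "Y", "Y": "X"}[p[c]]
            out.append("".join(p))
        return out

    # |00> is stabilized by {ZI, IZ}; H on qubit 0 then CNOT(0,1) gives a Bell state.
    stabs = ["ZI", "IZ"]
    stabs = apply_h(stabs, 0)        # -> ["XI", "IZ"]
    stabs = apply_cnot(stabs, 0, 1)  # -> ["XX", "ZZ"], the Bell-state stabilizers
    print(stabs)
    ```

    Each gate touches n generators in O(n) time, which is the polynomial cost underlying the theorem; the paper's extensions (general product-state inputs, adaptivity, multi-line outputs) are precisely the ingredients that break this picture.
    
    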