
    An asynchronous method for cloud-based rendering

    Interactive high-fidelity rendering is still unachievable on many consumer devices. Cloud gaming services have shown promise in delivering interactive graphics beyond the individual capabilities of user devices. However, these systems exhibit a number of shortcomings: high network bandwidth is required for higher resolutions, and input lag due to network fluctuations heavily disrupts the user experience. In this paper, we present a scalable solution for interactive high-fidelity graphics based on a distributed rendering pipeline in which direct lighting is computed on the client device and indirect lighting in the cloud. The client device keeps a local cache for indirect lighting which is asynchronously updated using an object-space representation; this allows us to achieve interactive rates, unconstrained by network performance, for a wide range of display resolutions, while remaining robust to input lag. Furthermore, in multi-user environments, the computation of indirect lighting is amortised over participating clients.
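
    The split between locally computed direct lighting and asynchronously refreshed indirect lighting can be illustrated with a minimal sketch. All names below (IndirectLightCache, fetch_indirect_from_cloud, compute_direct) and the patch-keyed cache layout are illustrative assumptions, not the paper's actual implementation:

        # Minimal sketch of a client-side indirect-lighting cache that is refreshed
        # asynchronously, so rendering never blocks on the network.  Names such as
        # compute_direct() and fetch_indirect_from_cloud() are illustrative stand-ins.
        import threading
        import time

        class IndirectLightCache:
            def __init__(self):
                self._values = {}            # object-space patch id -> cached irradiance
                self._lock = threading.Lock()

            def lookup(self, patch_id):
                # Rendering reads whatever is cached; a miss falls back to zero bounce.
                with self._lock:
                    return self._values.get(patch_id, 0.0)

            def update(self, patch_id, irradiance):
                with self._lock:
                    self._values[patch_id] = irradiance

        def fetch_indirect_from_cloud(patch_id):
            # Placeholder for the network round trip to the cloud renderer.
            time.sleep(0.05)                 # simulated latency; never blocks the client loop
            return 0.42                      # pretend irradiance value

        def refresh_loop(cache, patch_ids, stop):
            # Background thread: keeps the object-space cache warm while frames render.
            while not stop.is_set():
                for pid in patch_ids:
                    cache.update(pid, fetch_indirect_from_cloud(pid))

        def compute_direct(patch_id):
            return 1.0                       # stand-in for local direct-lighting evaluation

        if __name__ == "__main__":
            cache, stop = IndirectLightCache(), threading.Event()
            patches = range(8)
            threading.Thread(target=refresh_loop, args=(cache, patches, stop), daemon=True).start()
            for frame in range(3):           # client render loop runs at its own rate
                shading = [compute_direct(p) + cache.lookup(p) for p in patches]
                print(f"frame {frame}: first patch shading = {shading[0]:.2f}")
                time.sleep(0.016)
            stop.set()

    In this sketch the render loop only ever reads the cache, so a slow or jittery network degrades indirect-lighting freshness rather than frame rate, which mirrors the robustness claim in the abstract.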

    Graph-based segmentation and scene understanding for context-free point clouds

    The acquisition of 3D point clouds representing the surface structure of real-world scenes has become common practice in many areas including architecture, cultural heritage and urban planning. Improvements in sample acquisition rates and precision are contributing to an increase in the size and quality of point cloud data. The management of these large volumes of data is quickly becoming a challenge, leading to the design of algorithms intended to analyse and decrease the complexity of this data. Point cloud segmentation algorithms partition point clouds for better management, and scene understanding algorithms identify the components of a scene in the presence of considerable clutter and noise. In many cases, segmentation algorithms operate within the remit of a specific context, wherein their effectiveness is measured. Similarly, scene understanding algorithms depend on specific scene properties and fail to identify objects in a number of situations. This work addresses this lack of generality in current segmentation and scene understanding processes, and proposes methods for point clouds acquired using diverse scanning technologies in a wide spectrum of contexts. The approach to segmentation proposed by this work partitions a point cloud with minimal information, abstracting the data into a set of connected segment primitives to support efficient manipulation. A graph-based query mechanism is used to express further relations between segments and provide the building blocks for scene understanding. The presented method for scene understanding is agnostic of scene-specific context and supports both supervised and unsupervised approaches. In the former, a graph-based object descriptor is derived from a training process and used in object identification. The latter approach applies pattern matching to identify regular structures. A novel external memory algorithm based on a hybrid spatial subdivision technique is introduced to handle very large point clouds and accelerate the computation of the k-nearest neighbour function. Segmentation has been successfully applied to extract segments representing geographic landmarks and architectural features from a variety of point clouds, whereas scene understanding has been successfully applied to indoor scenes on which other methods fail. The overall results demonstrate that the context-agnostic methods presented in this work can be successfully employed to manage the complexity of ever-growing repositories.
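
    As a rough illustration of the abstraction step described above, the sketch below partitions a point cloud into connected segment primitives using a k-nearest-neighbour graph. The in-memory kd-tree, the fixed k and the distance threshold are simplifying assumptions made here for illustration; the work itself relies on an external memory hybrid spatial subdivision for very large clouds:

        # Minimal sketch: partition a point cloud into connected segments via a
        # k-nearest-neighbour graph, then treat each segment as a primitive that a
        # graph-based query mechanism could relate to others.  Thresholds and the
        # plain in-memory kd-tree are illustrative only.
        import numpy as np
        from scipy.spatial import cKDTree

        def knn_segment(points, k=8, max_edge=0.1):
            """Return a label per point: connected components of the k-NN graph,
            keeping only edges shorter than max_edge (a context-free criterion)."""
            tree = cKDTree(points)
            dists, nbrs = tree.query(points, k=k + 1)    # first neighbour is the point itself
            labels = np.arange(len(points))              # union-find parents

            def find(i):
                while labels[i] != i:
                    labels[i] = labels[labels[i]]        # path halving
                    i = labels[i]
                return i

            for i in range(len(points)):
                for d, j in zip(dists[i, 1:], nbrs[i, 1:]):
                    if d <= max_edge:
                        labels[find(i)] = find(j)        # union the two components
            return np.array([find(i) for i in range(len(points))])

        if __name__ == "__main__":
            pts = np.random.rand(1000, 3)
            seg = knn_segment(pts)
            print(len(np.unique(seg)), "segment primitives")

    Each resulting label stands for a segment primitive that would become a node in the graph traversed by the query mechanism.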

    Iterative partitioning and labelling of point cloud data

    Over the past few years the acquisition of 3D point information representing the structure of real-world objects has become common practice in many areas. This acquisition process has traditionally been carried out using 3D scanning devices based on laser or structured light techniques. Professional-grade 3D scanners are nowadays capable of producing highly accurate data at sampling rates of approximately a million points per second. Moreover, the popularisation of algorithms and tools capable of generating relatively accurate virtual representations of real-world scenes from photographs, without the need for expensive and specialised hardware, has led to an increase in the amount and availability of 3D point cloud data. The management and processing of these huge volumes of scanned data is quickly becoming a problem.

    Point-cloud decomposition for scene analysis and understanding

    Over the past decade digital photography has taken over from traditional film-based photography. The same can be said for video productions. A practice traditionally reserved only for the few has nowadays become commonplace. This has led to the creation of massive repositories of digital photographs and videos in various formats. Recently, another digital representation has started gaining traction, namely one that captures the geometry of real-world objects. In the latter, instead of using light sensors to store per-pixel colour values of visible objects, depth sensors (and additional hardware) are used to record the distance (depth) to the visible objects in a scene. This depth information can be used to create virtual reconstructions of the objects and scenes captured. Various technologies have been proposed and successfully used to acquire this information, ranging from very expensive equipment (e.g. long-range 3D scanners) to commodity hardware (e.g. Microsoft Kinect and Asus Xtion). A considerable amount of research has also looked into the extraction of accurate depth information from multi-view photographs of objects using specialised software (e.g. Microsoft PhotoSynth amongst many others). Recently, rapid advances in ubiquitous computing have also brought to the masses the possibility of capturing the world around them in 3D using smartphones and tablets (e.g. http://structure.io/).

    A risk driven state merging algorithm for learning DFAs

    When humans efficiently infer complex functions from relatively few but well-chosen examples, something beyond exhaustive search must probably be at work. Different heuristics are often employed during this learning process in order to efficiently infer target functions. Our current research focuses on different heuristics through which regular grammars can be efficiently inferred from a minimal amount of examples. A brief introduction to the theory of grammatical inference is given, followed by a brief discussion of the current state of the art in automata learning and of methods currently under development which we believe can improve automata learning when using sparse data.
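
    For readers unfamiliar with the setting, the sketch below shows the generic state-merging machinery such heuristics operate on: a prefix-tree acceptor built from labelled example strings and a compatibility test for candidate merges. It is a didactic illustration only; the risk-driven scoring developed in this work is not reproduced here, and the class and function names are invented:

        # Illustrative sketch of the state-merging setting: build a prefix-tree
        # acceptor from labelled examples, then test whether two states can be
        # merged without forcing an accepting and a rejecting state to coincide.
        class PTA:
            def __init__(self):
                self.trans = {0: {}}          # state -> {symbol: state}
                self.label = {0: None}        # None = unknown, True = accept, False = reject

            def add(self, string, accept):
                s = 0
                for ch in string:
                    if ch not in self.trans[s]:
                        new = len(self.trans)
                        self.trans[new] = {}
                        self.label[new] = None
                        self.trans[s][ch] = new
                    s = self.trans[s][ch]
                self.label[s] = accept

        def compatible(pta, a, b):
            """True if merging states a and b (and determinising their successors)
            never pairs an accepting state with a rejecting one."""
            if pta.label[a] is not None and pta.label[b] is not None \
                    and pta.label[a] != pta.label[b]:
                return False
            for ch in set(pta.trans[a]) & set(pta.trans[b]):
                if not compatible(pta, pta.trans[a][ch], pta.trans[b][ch]):
                    return False
            return True

        pta = PTA()
        for s, ok in [("ab", True), ("abab", True), ("a", False), ("aba", False)]:
            pta.add(s, ok)
        print(compatible(pta, 0, pta.trans[0]["a"]))   # can the root merge with state 'a'?

    A heuristic such as the one studied here decides which of the compatible merges to perform, and in which order.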

    Search diversification techniques for grammatical inference

    Grammatical Inference (GI) addresses the problem of learning a grammar G from a finite set of strings generated by G. By using GI techniques we want to be able to learn relations between syntactically structured sequences. This process of inferring the target grammar G can easily be posed as a search problem through a lattice of possible solutions. The vast majority of research being carried out in this area focuses on monotonic searches, i.e. ones that use the same heuristic function to perform a depth-first search into the lattice until a hypothesis is chosen. EDSM and S-EDSM are prime examples of this technique. In this paper we discuss the introduction of diversification into our search space [5]. By introducing diversification through pairwise incompatible merges, we traverse multiple disjoint paths in the search lattice and obtain better results for the inference process.
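
    The diversification idea can be sketched roughly as follows, with the incompatibility test simplified to "the merges share no state" and the scores, candidate merges and branch count all invented for illustration:

        # Sketch of diversification: from a heuristically ranked list of candidate
        # merges, keep only merges that are pairwise incompatible (simplified here
        # to "share no state"), and start one search branch from each so the
        # branches explore disjoint parts of the merge lattice.  Ranking values,
        # the incompatibility test and the branch search are placeholders.
        def diversify(candidates, max_branches=4):
            """candidates: list of (score, state_a, state_b), highest score first."""
            chosen, used = [], set()
            for score, a, b in sorted(candidates, reverse=True):
                if a not in used and b not in used:      # incompatible with all chosen merges
                    chosen.append((a, b))
                    used.update((a, b))
                if len(chosen) == max_branches:
                    break
            return chosen

        candidates = [(12, 1, 4), (11, 1, 7), (9, 2, 5), (8, 4, 6), (7, 3, 8)]
        for a, b in diversify(candidates):
            # Each selected merge would seed an independent EDSM-style search down
            # its own path of the lattice.
            print("branch rooted at merge", (a, b))

    Because the selected merges are pairwise incompatible, the paths explored from them through the lattice are disjoint by construction.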

    Non-monotonic search strategies for grammatical inference

    Advances in DFA learning algorithms have been relatively slow over the past few years. After the introduction of Rodney Price’s EDSM heuristic [4], pushing the limits of DFA learning appears to be a very difficult task. The S-EDSM heuristic proposed in [6, 1] manages to improve slightly on what EDSM can do. In this paper we outline our current research results and propose the use of non-monotonic search strategies in order to improve the success rate of DFA inference.

    Automatic interface generation for enumerative model checking

    Explicit state model checking techniques suffer from the state explosion problem [7]. Interfaces [6, 2] can provide a partial solution to this problem by means of compositional state space reduction and can thus be applied when verifying interestingly large examples. Interface generation has until now been largely a manual process, where experts in the system or protocol to be verified describe the interface. This can lead to errors appearing in the verification process unless additional checks of the interface's correctness are carried out. We address this issue by looking at the automatic generation of interfaces, which by the very nature of their construction can be guaranteed to be correct. This report outlines preliminary experiments carried out on automatic techniques for interface generation, together with their proofs of correctness.

    Automatic interface generation for compositional verification

    Compositional verification, the incremental generation and composition of the state graphs of individual processes to produce the global state graph, tries to address the state explosion problem for systems of communicating processes. The main problem with this approach is that intermediate state graphs are sometimes larger than the overall global system. To overcome this problem, interfaces, and refined interfaces which take into account a system’s environment, have been developed. The number of states of these interfaces plays a vital role in their applicability, since computational complexity is proportional to the number of states in the interface. The direct use of complete subcomponents of the global system as interfaces thus usually fails, and it is up to the system designer to describe smaller interfaces to be used in the reduction. To avoid having to verify the correctness of such manually generated interfaces, we propose automatic techniques to generate correct interfaces. The challenge is to produce interfaces that are small in size, yet effective for reduction. In this paper, we present techniques to structurally produce language over-approximations of labelled transition systems which can be used as correct interfaces, and combine them with refined interfaces. The techniques are applied to a number of case studies, analysing the trade-off between interface size and effectiveness.
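
    One simple way to obtain a language over-approximation, shown below purely as an illustrative sketch, is to quotient a labelled transition system by a partition of its states: every trace of the original survives in the quotient, so the smaller system is a safe interface. The example LTS and partition are invented, and the paper's structural techniques for choosing effective partitions are not reproduced here:

        # Sketch: quotient an LTS by a state partition to get a smaller system whose
        # language includes the original's, i.e. a correct (over-approximating)
        # interface.  The partition-selection strategy is the interesting part and
        # is deliberately left out.
        def quotient(transitions, partition):
            """transitions: set of (state, label, state); partition: dict state -> block id.
            Returns the quotient transition relation over block ids."""
            return {(partition[s], a, partition[t]) for (s, a, t) in transitions}

        lts = {(0, "req", 1), (1, "ack", 2), (2, "req", 3), (3, "ack", 0)}
        blocks = {0: "even", 1: "odd", 2: "even", 3: "odd"}   # arbitrary illustrative partition
        for edge in sorted(quotient(lts, blocks)):
            print(edge)

    The trade-off studied in the paper is visible even here: a coarser partition gives a smaller interface but admits more spurious behaviours.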

    Model checking user interfaces

    User interfaces are crucial for the success of most software projects. As software grows in complexity, there is a corresponding growth in user interface complexity, which leads to bugs that may be difficult to find by means of testing. In this paper we use automated model checking to verify user interfaces with respect to a formal specification. We present an algorithm for the automated abstraction of the user interface model of a given system, which uses asynchronous and interleaving composition of a number of programs. This technique was successful at verifying the user interface of a case study and brings us one step closer to push-button verification.
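
    The asynchronous, interleaving composition mentioned above can be sketched as a product construction in which, at each step, one component moves while the other stays put. The two toy components below are invented for illustration and stand in for the programs composed by the abstraction algorithm:

        # Minimal sketch of asynchronous, interleaving composition: either component
        # may take a step while the other idles, giving transitions over state pairs.
        def interleave(ts1, ts2):
            """Each ts is (states, transitions) with transitions as (src, label, dst).
            Returns the transitions of the interleaved product over state pairs."""
            s1, t1 = ts1
            s2, t2 = ts2
            combined = []
            for (a, lbl, b) in t1:                 # component 1 moves, component 2 idles
                combined += [((a, q), lbl, (b, q)) for q in s2]
            for (p, lbl, q) in t2:                 # component 2 moves, component 1 idles
                combined += [((s, p), lbl, (s, q)) for s in s1]
            return combined

        dialog = ({"closed", "open"}, [("closed", "show", "open"), ("open", "dismiss", "closed")])
        button = ({"idle", "pressed"}, [("idle", "press", "pressed"), ("pressed", "release", "idle")])
        print(len(interleave(dialog, button)), "interleaved transitions")

    A model checker would then explore this product (after abstraction) against the formal specification of the user interface.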