
    Trial Technique in Antitrust Cases


    High-performance geometric vascular modelling

    Image-based high-performance geometric vascular modelling and reconstruction is an essential component of computer-assisted surgery for the diagnosis, analysis and treatment of cardiovascular diseases. However, efficiently reconstructing accurate geometric structures of blood vessels from medical images is an extremely challenging task. For one thing, the shape of an individual section of a blood vessel is highly irregular because of squeezing by surrounding tissues and deformation caused by vascular diseases. For another, a vascular system is a very complicated network of blood vessels with different types of branching structures. Although some existing vascular modelling techniques can reconstruct the geometric structure of a vascular system, they are either time-consuming or lack sufficient accuracy. Moreover, these techniques rarely consider the interior tissue of the vascular wall, which consists of complicated layered structures. It is therefore necessary to develop a better vascular geometric modelling technique, one that not only reconstructs vascular surfaces with high performance and high accuracy but can also model the interior tissue structures of the vascular walls.

    This research aims to develop a state-of-the-art, patient-specific, medical image-based geometric vascular modelling technique to solve the above problems. The main contributions of this research are:

    - The Skeleton Marching technique, developed to reconstruct the geometric structures of blood vessels with high performance and high accuracy. With this technique, the highly complicated vascular reconstruction task is reduced to a set of simple localised geometric reconstruction tasks that can be carried out in parallel. The locally reconstructed vascular geometric segments are then combined using shape-preserving blending operations to faithfully represent the geometric shape of the whole vascular system.
    - The Thin Implicit Patch method, developed to realistically model the interior geometric structures of the vascular tissues. This method allows multi-layer interior tissue structures to be embedded inside the vascular wall to illustrate the geometric details of real-world blood vessels.
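    To make the skeleton-based, locally parallel reconstruction idea concrete, the sketch below shows one way such a scheme could be organised: each skeleton segment defines a local implicit field that can be evaluated independently (and therefore in parallel), and a smooth blending operator merges the local fields into a single surface. The capsule fields, the polynomial smooth-union blend and the toy bifurcation skeleton are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch: local implicit fields around skeleton segments, combined with
# a smooth "shape-preserving" blend. Radii and the blend operator are assumptions.
import numpy as np

def segment_field(p, a, b, r):
    """Signed-distance-like field of a capsule around skeleton segment a-b with radius r."""
    ap, ab = p - a, b - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)  # closest-point parameter on the segment
    closest = a + t * ab
    return np.linalg.norm(p - closest) - r

def smooth_union(d1, d2, k=0.5):
    """Polynomial smooth-min blend of two implicit values (one possible blending operator)."""
    h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
    return d2 * (1 - h) + d1 * h - k * h * (1 - h)

def vessel_field(p, skeleton):
    """Blend all local segment fields into a single implicit value at point p."""
    local = [segment_field(p, a, b, r) for a, b, r in skeleton]  # independent local tasks
    d = local[0]
    for dj in local[1:]:
        d = smooth_union(d, dj)
    return d  # the vessel surface is the zero level set: vessel_field(p) == 0

# Toy bifurcation: a parent segment splitting into two branches.
skeleton = [
    (np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), 0.30),
    (np.array([0.0, 0.0, 1.0]), np.array([0.4, 0.0, 1.8]), 0.20),
    (np.array([0.0, 0.0, 1.0]), np.array([-0.4, 0.0, 1.8]), 0.20),
]
print(vessel_field(np.array([0.0, 0.0, 0.5]), skeleton))  # negative: the point lies inside the vessel
```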

    Coherent multi-dimensional segmentation of multiview images using a variational framework and applications to image based rendering

    Image Based Rendering (IBR), and in particular light field rendering, has attracted a lot of attention for interpolating new viewpoints from a set of multiview images. New images of a scene are interpolated directly from nearby available ones, thus enabling photorealistic rendering. Sampling theory for light fields has shown that exact geometric information about the scene is often unnecessary for rendering new views: the light field is approximately bandlimited, so new views can be rendered using classical interpolation methods. However, IBR using undersampled light fields suffers from aliasing effects and is particularly difficult when the scene has large depth variations and occlusions. In order to deal with these cases, we study two approaches. First, new sampling schemes have recently emerged that are able to perfectly reconstruct certain classes of parametric signals that are not bandlimited but are characterized by a finite number of parameters. In this context, we derive novel sampling schemes for piecewise sinusoidal and polynomial signals. In particular, we show that a piecewise sinusoidal signal with arbitrarily high frequencies can be exactly recovered under certain conditions. These results are applied to parametric multiview data that are not bandlimited. Second, we focus on the problem of extracting regions (or layers) in multiview images that can be individually rendered free of aliasing. The problem is posed in a multidimensional variational framework using region competition. Extending previous methods, layers are considered as multi-dimensional hypervolumes, so the segmentation is carried out jointly over all the images and coherence is imposed throughout the data. However, instead of propagating active hypersurfaces, we derive a semi-parametric methodology that takes into account the constraints imposed by the camera setup and the occlusion ordering. The resulting framework is a global multi-dimensional region competition that is consistent across all the images and efficiently handles occlusions. We show the validity of the approach with captured light fields. Other special effects such as augmented reality and disocclusion of hidden objects are also demonstrated.
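    As a small illustration of why per-layer rendering sidesteps aliasing, the sketch below renders an intermediate viewpoint of a single constant-depth layer from two rectified reference views, shifting each view by the layer's disparity before blending. The disparity value, the camera parameterisation and the toy images are assumptions made purely for illustration; this is not the segmentation or sampling machinery of the thesis.

```python
# Minimal sketch: alias-free interpolation of one constant-depth layer between two views.
import numpy as np

def render_layer_view(left, right, alpha, disparity):
    """Interpolate a virtual view at fractional position alpha in [0, 1] between two
    rectified views, for a layer whose content shifts `disparity` pixels to the right
    from the left view to the right view."""
    w = left.shape[1]
    cols = np.arange(w)
    # Sample each reference view where the layer would appear in the virtual view.
    left_cols = np.clip(cols - alpha * disparity, 0, w - 1).astype(int)
    right_cols = np.clip(cols + (1 - alpha) * disparity, 0, w - 1).astype(int)
    warped_left = left[:, left_cols]
    warped_right = right[:, right_cols]
    return (1 - alpha) * warped_left + alpha * warped_right

# Toy example: two views of a layer with 4 pixels of disparity between them.
left = np.tile(np.sin(np.linspace(0, 8 * np.pi, 256)), (64, 1))
right = np.roll(left, 4, axis=1)
middle = render_layer_view(left, right, alpha=0.5, disparity=4)
print(middle.shape)
```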

    English Legal System Shake-Up: Genuine Reform or Teapot Tempest?


    Backdooring Textual Inversion for Concept Censorship

    Recent years have witnessed the success of AIGC (AI-Generated Content). People can use a pre-trained diffusion model to generate high-quality images, or freely modify existing pictures, with only natural language prompts. More excitingly, emerging personalization techniques make it feasible to create images of a specific desired concept with only a few images as references. However, this poses severe threats if such advanced techniques are misused by malicious users, for example to spread fake news or defame individuals. Thus, it is necessary to regulate personalization models (i.e., concept censorship) as they develop and advance. In this paper, we focus on the personalization technique dubbed Textual Inversion (TI), which is becoming prevalent thanks to its lightweight nature and excellent performance. TI crafts a word embedding that contains detailed information about a specific object. Users can easily download such a word embedding from public websites like Civitai and add it to their own Stable Diffusion model for personalization, without any fine-tuning. To achieve concept censorship of a TI model, we propose leveraging the backdoor technique for good by injecting backdoors into the Textual Inversion embeddings. Briefly, we select some sensitive words as triggers during the training of TI; these words will be censored for normal use. In the subsequent generation stage, if the triggers are combined with the personalized embedding in the final prompt, the model outputs a pre-defined target image rather than images including the desired malicious concept. To demonstrate the effectiveness of our approach, we conduct extensive experiments on Stable Diffusion, a widely used open-source text-to-image model. Our code, data, and results are available at https://concept-censorship.github.io
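    The core idea can be read as a two-term training objective. The sketch below is a schematic interpretation of the abstract, not the authors' code: diffusion_loss(prompt, embedding, image) is a hypothetical helper standing in for the usual denoising reconstruction loss of a frozen diffusion model with the placeholder token <ti> bound to the learnable embedding, and the prompt templates, trigger words and weight lam are assumptions.

```python
# Schematic sketch of backdoored Textual Inversion training (assumptions, not the authors' code).
import random
import torch

def train_backdoored_ti(ti_embedding, concept_images, target_image,
                        trigger_words, diffusion_loss,
                        steps=1000, lr=5e-3, lam=1.0):
    """ti_embedding: the single learnable word vector (torch.nn.Parameter);
    everything else in the diffusion model stays frozen, as in ordinary TI."""
    optimizer = torch.optim.Adam([ti_embedding], lr=lr)
    for _ in range(steps):
        # 1) Benign term: the embedding must still reconstruct the concept,
        #    so normal personalised generation keeps working.
        benign = diffusion_loss("a photo of <ti>", ti_embedding,
                                random.choice(concept_images))

        # 2) Backdoor term: whenever a trigger word co-occurs with the embedding,
        #    generation is steered towards the pre-defined target image.
        trigger = random.choice(trigger_words)
        poisoned = diffusion_loss(f"a photo of {trigger} <ti>", ti_embedding,
                                  target_image)

        loss = benign + lam * poisoned
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return ti_embedding
```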

    A perceptual approach for stereoscopic rendering optimization

    The traditional way of stereoscopic rendering requires rendering the scene for the left and right eyes separately, which doubles the rendering complexity. In this study, we propose a perceptually based approach for accelerating stereoscopic rendering. This optimization approach is based on Binocular Suppression Theory, which claims that the overall percept of a stereo pair in a region is determined by the dominant image in the corresponding region. We investigate how the binocular suppression mechanism of the human visual system can be utilized for rendering optimization. Our aim is to identify the graphics rendering and modeling features that do not affect the overall quality of a stereo pair when simplified in one view. By combining the results of this investigation with the principles of visual attention, we infer that this optimization approach is feasible if the high-quality view has more intensity contrast. For this reason, we performed a subjective experiment in which various representative graphical methods were analyzed. The experimental results verified our hypothesis that a modification applied to a single view is not perceptible if it decreases the intensity contrast, and can thus be used for stereoscopic rendering.
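    A toy sketch of the selection rule suggested by this finding is shown below: keep the higher-contrast view at full quality and allow a contrast-reducing simplification (here a crude blur, standing in for any cheaper rendering path) only in the other view. The RMS contrast measure and the blur placeholder are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch: simplify only the stereo view whose intensity contrast is already lower.
import numpy as np

def rms_contrast(img):
    """Root-mean-square intensity contrast of a grayscale image in [0, 1]."""
    return float(np.std(img))

def box_blur(img, k=5):
    """Simple separable box blur, used here as a placeholder for a cheaper rendering path."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def simplify_stereo_pair(left, right):
    """Keep the higher-contrast view intact; degrade (cheapen) the lower-contrast one."""
    if rms_contrast(left) >= rms_contrast(right):
        return left, box_blur(right)   # right eye gets the simplified image
    return box_blur(left), right       # left eye gets the simplified image

# Toy usage with random "renderings"; the second view has lower contrast.
rng = np.random.default_rng(0)
left = rng.random((64, 64))
right = 0.5 + 0.1 * rng.random((64, 64))
out_left, out_right = simplify_stereo_pair(left, right)
print(rms_contrast(out_left), rms_contrast(out_right))
```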

    The decline of infinitival complementation in Ancient Greek: a case of diachronic ambiguity resolution?

    Several reasons have been proposed for the decline of infinitival complementation in Ancient Greek: the fact that the infinitive became morphologically restricted, the inherent redundancy of the Classical complementation system, and language contact. In this article, I explore yet another reason for the decline of the infinitive: I argue that the system of infinitival complementation became fundamentally ambiguous in its expression in later Greek. As has been noted previously, the loss of the future and perfect tense had a serious impact on the use of infinitival complementation. However, rather than there being an ‘omission’ of temporal distinctions, as previous studies have claimed, I argue that the present and aorist infinitive became polyfunctional, being used for anterior, simultaneous, and posterior events. Next to temporal ambiguity, a second type of ambiguity occurred: ‘modal’ ambiguity, or ambiguity with regard to the speech function of the complement clause. Already in Classical times, the present and aorist infinitive could be used after certain verb classes to encode both ‘propositions’ and ‘proposals’ (offers/commands), an ambiguity which continues to be found in later Greek. The study is based on a corpus of documentary texts from the Roman and Byzantine periods (I–VIII AD).

    A software based mentor system

    This thesis describes the architecture, implementation issues and evaluation of Mentor, an educational support system designed to mentor students in their university studies. Students can ask (by typing) natural language questions, and Mentor uses several educational paradigms to respond with information from its Knowledge Base or from data-mined online Web sites. Typically the questions focus on the student’s assignments or on preparation for examinations. Mentor is also proactive in that it prompts the student with questions such as "Have you started your assignment yet?". If the student responds and enters into a dialogue with Mentor, then, based upon the student’s questions and answers, it guides them through a Directed Learning Path planned by the lecturer, specific to that assessment.

    The objectives of the research were to determine whether such a system could be designed, developed and applied in a large-scale, real-world environment, and whether the resulting system was beneficial to the students using it. The study was significant in that it provided an analysis of the design and implementation of the system as well as a detailed evaluation of its use. This research integrated the Computer Science disciplines of network communication, natural language parsing, user interface design and software agents, together with pedagogies from the Computer Aided Instruction and Intelligent Tutoring System fields of Education. Collectively, these disciplines provide the foundation for the two main thesis research areas of Dialogue Management and Tutorial Dialogue Systems.

    The development and analysis of the Mentor system required the design and implementation of an easy-to-use text-based interface, a hyper- and multi-media graphical user interface, a client-server system, and a dialogue management system based on an extensible kernel. The multi-user Java-based client-server system used Perl-5 regular expression pattern matching for natural language parsing, along with a state-based Dialogue Manager and a Knowledge Base marked up using the XML-based Virtual Human Markup Language. The kernel was also used in other Dialogue Management applications, such as computer-generated Talking Heads. The system also enabled users to easily program their own knowledge into the Knowledge Base and to program new information retrieval or management tasks, so that the system could grow with the user. The overall framework integrating and managing the above components into a usable system employed suitable educational pedagogies that supported the student’s learning process.

    The thesis outlines the learning paradigms used in, and summarises the evaluation of, three course-based Case Studies of university students’ perception of the system, assessing how effective and useful it was and whether students benefited from using it. This thesis demonstrates that Mentor met its objectives and was very successful in helping students with their university studies. As one participant indicated: ‘I couldn’t have done without it.’
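    As a rough illustration of the kind of mechanism described above (a state-based dialogue manager driven by regular-expression pattern matching over a knowledge base), consider the sketch below. The states, patterns and replies are invented for illustration and are not taken from the Mentor system, whose Knowledge Base was marked up in VHML and matched with Perl-5 regular expressions rather than Python ones.

```python
# Minimal sketch of a state-based, regex-driven dialogue manager with a tiny knowledge base.
import re

KNOWLEDGE_BASE = {
    "idle": [
        (re.compile(r"\b(assignment|coursework)\b", re.I),
         "Have you started your assignment yet?", "assignment"),
        (re.compile(r"\bexam", re.I),
         "The exam covers all lecture topics. Which topic is unclear?", "exam"),
    ],
    "assignment": [
        (re.compile(r"\b(no|not yet)\b", re.I),
         "Start with the specification on the course page, then outline your design.", "idle"),
        (re.compile(r"\b(yes|started)\b", re.I),
         "Great. Which part are you stuck on?", "idle"),
    ],
    "exam": [
        (re.compile(r".*", re.I),
         "Review the worked examples for that topic, then try last year's paper.", "idle"),
    ],
}

class Mentor:
    def __init__(self):
        self.state = "idle"          # current dialogue state

    def reply(self, utterance):
        # Try each (pattern, answer, next_state) rule registered for the current state.
        for pattern, answer, next_state in KNOWLEDGE_BASE.get(self.state, []):
            if pattern.search(utterance):
                self.state = next_state
                return answer
        return "I'm not sure yet. Could you rephrase the question?"

m = Mentor()
print(m.reply("I have a question about my assignment"))  # -> proactive follow-up question
print(m.reply("no, not yet"))                             # -> guidance, back to idle state
```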