
    Polynomial Fourier decay and a cocycle version of Dolgopyat's method for self-conformal measures

    We show that every self-conformal measure with respect to a $C^2(\mathbb{R})$ IFS $\Phi$ has polynomial Fourier decay under some mild and natural non-linearity conditions. In particular, every such measure has polynomial decay if $\Phi$ is $C^\omega(\mathbb{R})$ and contains a non-affine map. A key ingredient in our argument is a cocycle version of Dolgopyat's method that does not require the cylinder covering of the attractor to be a Markov partition. It is used to obtain spectral gap-type estimates for the transfer operator, which in turn imply a renewal theorem with an exponential error term in the spirit of Li (2022).
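    For context, polynomial Fourier decay of a measure $\mu$ on $\mathbb{R}$ means that its Fourier transform obeys a power-law bound. A standard formulation (the abstract does not specify the exponent, so $\alpha$ below is generic) is:

        \[
        \widehat{\mu}(\xi) = \int_{\mathbb{R}} e^{-2\pi i \xi x}\, d\mu(x),
        \qquad
        |\widehat{\mu}(\xi)| = O\!\left(|\xi|^{-\alpha}\right)
        \ \text{as } |\xi| \to \infty, \ \text{for some } \alpha > 0.
        \]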

    PointGMM: a Neural GMM Network for Point Clouds

    Point clouds are a popular representation for 3D shapes. However, they encode a particular sampling without accounting for shape priors or non-local information. We advocate for the use of a hierarchical Gaussian mixture model (hGMM), which is a compact, adaptive and lightweight representation that probabilistically defines the underlying 3D surface. We present PointGMM, a neural network that learns to generate hGMMs which are characteristic of the shape class, and also coincide with the input point cloud. PointGMM is trained over a collection of shapes to learn a class-specific prior. The hierarchical representation has two main advantages: (i) coarse-to-fine learning, which avoids converging to poor local minima; and (ii) an unsupervised, consistent partitioning of the input shape. We show that as a generative model, PointGMM learns a meaningful latent space which enables generating consistent interpolations between existing shapes, as well as synthesizing novel shapes. We also present a novel framework for rigid registration using PointGMM that learns to disentangle orientation from structure of an input shape. Comment: CVPR 2020 -- final version.
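    As a rough illustration of the underlying representation (not the authors' network, which predicts hierarchical GMMs with a neural model), a flat Gaussian mixture can be fit to a point cloud and then resampled; the component count and sizes below are arbitrary assumptions:

        # Minimal sketch: fit a flat (non-hierarchical) GMM to a point cloud and
        # resample it. Illustrates the probabilistic surface representation only.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        points = np.random.rand(2048, 3)             # stand-in for an input point cloud
        gmm = GaussianMixture(n_components=32, covariance_type="full").fit(points)

        resampled, component_ids = gmm.sample(4096)  # densify the shape from the model
        log_density = gmm.score_samples(points)      # per-point log-likelihood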

    AnyLens: A Generative Diffusion Model with Any Rendering Lens

    State-of-the-art diffusion models can generate highly realistic images based on various conditioning such as text, segmentation, and depth. However, an essential aspect often overlooked is the specific camera geometry used during image capture: different optical systems strongly influence the final scene appearance. This study introduces a framework that tightly integrates a text-to-image diffusion model with the particular lens geometry used in image rendering. Our method is based on per-pixel coordinate conditioning, enabling control over the rendering geometry. Notably, we demonstrate the manipulation of curvature properties, achieving diverse visual effects such as fish-eye, panoramic views, and spherical texturing using a single diffusion model.
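    The per-pixel coordinate conditioning can be pictured as attaching, to every pixel, the coordinates it takes under the chosen lens and feeding these as extra channels to the diffusion model. A minimal sketch (the radial distortion model and channel layout are illustrative assumptions, not the paper's exact design):

        import numpy as np

        def fisheye_coordinate_grid(h, w, strength=0.5):
            # Per-pixel coordinates under a simple radial (fisheye-like) warp.
            # Returns an (h, w, 2) array that could be concatenated to a diffusion
            # model's input as conditioning channels; the warp is illustrative only.
            ys, xs = np.mgrid[0:h, 0:w]
            x = 2 * xs / (w - 1) - 1                 # normalize to [-1, 1]
            y = 2 * ys / (h - 1) - 1
            r2 = x**2 + y**2
            scale = 1 + strength * r2                # radial distortion factor
            return np.stack([x * scale, y * scale], axis=-1)

        coords = fisheye_coordinate_grid(64, 64)     # (64, 64, 2) conditioning grid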

    SENS: Sketch-based Implicit Neural Shape Modeling

    We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches, including those of an abstract nature. Our method allows users to quickly and easily sketch a shape, and then maps the sketch into the latent space of a part-aware neural implicit shape architecture. SENS analyzes the sketch and encodes its parts into ViT patch encodings, then feeds them into a transformer decoder that converts them into shape embeddings suitable for editing 3D neural implicit shapes. SENS not only provides intuitive sketch-based generation and editing, but also excels in capturing the intent of the user's sketch to generate a variety of novel and expressive 3D shapes, even from abstract sketches. We demonstrate the effectiveness of our model compared to the state of the art using objective metric evaluation criteria and a decisive user study, both indicating strong performance on sketches with a medium level of abstraction. Furthermore, we showcase its intuitive sketch-based shape editing capabilities. Comment: 18 pages, 18 figures.
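    A minimal sketch of the sketch-to-shape-embedding pipeline described above; all sizes, layer counts, and the use of learned per-part queries are assumptions, and the part-aware implicit decoder is omitted:

        import torch
        import torch.nn as nn

        class SketchToShapeEmbedding(nn.Module):
            def __init__(self, patch=16, dim=256, n_parts=8):
                super().__init__()
                # ViT-style patch encoding of a single-channel sketch image
                self.patchify = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
                layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
                self.decoder = nn.TransformerDecoder(layer, num_layers=4)
                self.part_queries = nn.Parameter(torch.randn(n_parts, dim))  # one per part

            def forward(self, sketch):                                     # (B, 1, 224, 224)
                tokens = self.patchify(sketch).flatten(2).transpose(1, 2)  # (B, N, dim)
                queries = self.part_queries.expand(sketch.size(0), -1, -1)
                return self.decoder(queries, tokens)                       # (B, n_parts, dim)

        emb = SketchToShapeEmbedding()(torch.randn(2, 1, 224, 224))        # (2, 8, 256)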

    Prompt-to-Prompt Image Editing with Cross Attention Control

    Recent large-scale text-driven synthesis models have attracted much attention thanks to their remarkable capability of generating highly diverse images that follow given text prompts. Such text-based synthesis methods are particularly appealing to humans, who are accustomed to describing their intent verbally. It is therefore natural to extend text-driven image synthesis to text-driven image editing. Editing is challenging for these generative models, since an innate property of an editing technique is to preserve most of the original image, while in text-based models even a small modification of the text prompt often leads to a completely different outcome. State-of-the-art methods mitigate this by requiring the user to provide a spatial mask to localize the edit, hence ignoring the original structure and content within the masked region. In this paper, we pursue an intuitive prompt-to-prompt editing framework in which the edits are controlled by text only. To this end, we analyze a text-conditioned model in depth and observe that the cross-attention layers are the key to controlling the relation between the spatial layout of the image and each word in the prompt. With this observation, we present several applications that control the image synthesis by editing the textual prompt only. These include localized editing by replacing a word, global editing by adding a specification, and even delicately controlling the extent to which a word is reflected in the image. We present our results over diverse images and prompts, demonstrating high-quality synthesis and fidelity to the edited prompts.
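    The core mechanism can be sketched as injecting the cross-attention maps computed for the source prompt into the pass that uses the edited prompt, so the spatial layout is preserved while the attended values change. A toy single-layer illustration (shapes and the injection policy are simplified assumptions, not the paper's full method):

        import torch
        import torch.nn.functional as F

        def cross_attention(q, k, v, injected_probs=None):
            # q: image queries (B, Nq, d); k, v: text keys/values (B, Nt, d)
            probs = F.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)
            if injected_probs is not None:
                probs = injected_probs            # keep source layout, use new values
            return probs @ v, probs

        B, Nq, Nt, d = 1, 64, 8, 32
        q = torch.randn(B, Nq, d)
        k_src, v_src = torch.randn(B, Nt, d), torch.randn(B, Nt, d)
        k_edit, v_edit = torch.randn(B, Nt, d), torch.randn(B, Nt, d)

        _, src_probs = cross_attention(q, k_src, v_src)        # pass with source prompt
        edited, _ = cross_attention(q, k_edit, v_edit,
                                    injected_probs=src_probs)  # pass with edited prompt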

    Angiographic evidence for reduced graft patency due to competitive flow in composite arterial T-grafts

    Objective: Composite arterial grafting causes splitting of internal thoracic artery flow to various myocardial regions. The amount of flow supplying each region depends on the severity of coronary stenosis. Competitive flow in the native coronary artery can cause occlusion or severe narrowing of the internal thoracic artery supplying this coronary vessel.
    Methods: Two hundred three consecutive postoperative coronary angiographies of 163 patients who underwent bilateral internal thoracic artery grafting using the composite T-graft technique were analyzed. Angiographies were performed in symptomatic patients or in patients with a positive thallium scan between 2 and 102 months after surgery and were compared with preoperative angiograms.
    Results: In 123 patients, both internal thoracic arteries were patent. The remaining 40 control patients had at least 1 nonfunctioning internal thoracic artery. A lower stenosis rate in the left anterior descending and circumflex arteries was associated with a higher occlusion rate of the left internal thoracic artery (P < .005) and the right internal thoracic artery (P < .005), respectively. In 19 angiograms of 18 patients, graft failure could be related to competitive flow. These included 7 patients with disease of the left main artery and a preoperative stenosis degree ranging between 50% and 80%, 8 patients with moderate stenosis (70% or less) of the circumflex artery, and 3 with moderate stenosis of the left anterior descending artery. Three of the patients with left main disease, 2 of the patients with competitive flow in the circumflex artery, and all patients in the subgroup with left anterior descending arterial disease underwent percutaneous or surgical reintervention.
    Conclusion: The composite T-graft technique of bilateral internal thoracic artery grafting should be reserved for patients with severe (70% or more) left anterior descending and circumflex arterial stenosis.