120 research outputs found

    Synthesis, characterization and ethylene polymerization behaviour of binuclear nickel halides bearing 4,5,9,10-tetra(arylimino)pyrenylidenes

    Pyrene-4,5,9,10-tetraone was prepared via the oxidation of pyrene and reacted with various anilines to afford a series of 4,5,9,10-tetra(arylimino)pyrenylidene derivatives (L1–L4). The tetraimino-pyrene compounds L1 and L2 were reacted with two equivalents of (DME)NiBr₂ in CH₂Cl₂ to afford the corresponding dinickel bromide complexes (Ni1 and Ni2). The organic compounds were fully characterized, whilst the bimetallic complexes were characterized by FT-IR spectroscopy and elemental analysis. The molecular structures of representative organic and nickel compounds were confirmed by single-crystal X-ray diffraction studies. These nickel complexes exhibited high activities towards ethylene polymerization in the presence of either MAO or Me₂AlCl, maintaining high activity over a prolonged period (longer than that of previously reported dinickel pre-catalysts). The polyethylene obtained was characterized by GPC, DSC and FT-IR spectroscopy and was found to possess branched features.

    Biphenyl-bridged 6-(1-aryliminoethyl)-2-iminopyridyl-cobalt complexes: synthesis, characterization and ethylene polymerization behavior

    A series of biphenyl-bridged 6-(1-aryliminoethyl)-2-iminopyridine derivatives reacted with cobalt dichloride in dichloromethane/ethanol to afford the corresponding binuclear cobalt complexes. The cobalt complexes were characterized by FT-IR spectroscopy and elemental analysis, and the structure of a representative complex was confirmed by single-crystal X-ray diffraction. Upon activation with either MAO or MMAO, these cobalt complexes performed with high activities of up to 1.2 × 10⁷ g (mol of Co)⁻¹ h⁻¹ in ethylene polymerization, representing one of the most active cobalt-based catalytic systems for ethylene polymerization. These biphenyl-bridged bis(imino)pyridylcobalt precatalysts exhibited higher activities than their mononuclear bis(imino)pyridylcobalt counterparts and, more importantly, showed better thermal stability and longer lifetimes. The polyethylenes obtained were characterized by GPC, DSC, and high-temperature NMR spectroscopy and mostly possessed unimodal and highly linear features.

    Synthesis and characterization of 2-(2-benzhydrylnaphthyliminomethyl)pyridylnickel halides: formation of branched polyethylene

    A series of 2-(2-benzhydrylnaphthyliminomethyl)pyridine derivatives (L1–L3) was prepared and used to synthesize the corresponding bis-ligated nickel(II) halide complexes (Ni1–Ni6) in good yield. The molecular structures of representative complexes, namely the bromide Ni3 and the chloride Ni6, were confirmed by single-crystal X-ray diffraction and revealed a distorted octahedral geometry at nickel. Upon activation with either methylaluminoxane (MAO) or modified methylaluminoxane (MMAO), all nickel pre-catalysts exhibited high activities (up to 2.02 × 10⁷ g(PE) mol⁻¹(Ni) h⁻¹) towards ethylene polymerization, producing branched polyethylene of low molecular weight and narrow polydispersity. The influence of the reaction parameters and of the nature of the ligands on the catalytic behavior of the title nickel complexes was investigated.

    2-(1-(2-Benzhydrylnaphthylimino)ethyl)pyridylnickel halides: Synthesis, characterization, and ethylene polymerization behavior

    A series of 2-(1-(2-benzhydrylnaphthylimino)ethyl)pyridine derivatives (L1–L3) was synthesized and fully characterized. The organic compounds acted as bidentate ligands on reacting with nickel halides to afford two kinds of nickel complexes: either mononuclear bis-ligated L₂NiBr₂ (Ni1–Ni3) or chloro-bridged dinuclear L₂Ni₂Cl₄ (Ni4–Ni6) complexes. The nickel complexes were fully characterized, and single-crystal X-ray diffraction revealed, for Ni2, a distorted square-pyramidal geometry at nickel comprising four nitrogen atoms of two ligands and one bromide, whereas for Ni4, a centrosymmetric dimer possessing a distorted octahedral geometry at nickel was formed by two nitrogen atoms of one ligand, two bridging chlorides and one terminal chloride, along with an oxygen from methanol (solvent). When activated with diethylaluminium chloride (Et₂AlCl), all nickel complexes performed with high activities (up to 1.22 × 10⁷ g(PE) mol⁻¹(Ni) h⁻¹) towards ethylene polymerization; the polyethylene obtained possessed high branching, low molecular weight and narrow polydispersity, suggestive of a single-site active species. The effect of the polymerization parameters, including the nature of the ligands and halides, on the catalytic performance is discussed.

    Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners

    Video and audio content creation serves as a core technique for the movie industry and professional users. Existing diffusion-based methods tackle video and audio generation separately, which hinders technique transfer from academia to industry. In this work, we aim at filling the gap with a carefully designed optimization-based framework for cross-visual-audio and joint visual-audio generation. We observe the powerful generation ability of off-the-shelf video and audio generation models. Thus, instead of training the giant models from scratch, we propose to bridge the existing strong models with a shared latent representation space. Specifically, we propose a multimodality latent aligner based on the pre-trained ImageBind model. Our latent aligner shares a similar core with classifier guidance, which steers the diffusion denoising process at inference time. Through a carefully designed optimization strategy and loss functions, we show the superior performance of our method on joint video-audio generation, visual-steered audio generation, and audio-steered visual generation tasks. The project website can be found at https://yzxing87.github.io/Seeing-and-Hearing/
    Comment: Accepted to CVPR 2024. Project website: https://yzxing87.github.io/Seeing-and-Hearing
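    The classifier-guidance-style alignment described in this abstract can be sketched in miniature: at each step, the latent is nudged along the gradient of an alignment loss computed in a shared embedding space. This is only an illustrative sketch; the functions `embed`, `alignment_grad`, and `guided_step`, the projection matrix `W`, and the use of a random linear map in place of ImageBind are all hypothetical stand-ins, not the paper's actual code.

    ```python
    import numpy as np

    def embed(x, W):
        """Hypothetical projection of a latent into a shared embedding space
        (a random linear map standing in for an ImageBind-like encoder)."""
        v = W @ x
        return v / (np.linalg.norm(v) + 1e-8)

    def alignment_grad(x, W, target_emb, eps=1e-5):
        """Numerical gradient of the negative cosine similarity between
        embed(x, W) and a fixed target embedding."""
        g = np.zeros_like(x)
        base = -embed(x, W) @ target_emb
        for i in range(x.size):
            xp = x.copy()
            xp[i] += eps
            g[i] = ((-embed(xp, W) @ target_emb) - base) / eps
        return g

    def guided_step(x, W, target_emb, lr=0.1):
        """One guidance step: move the latent toward the target embedding,
        analogous to adding a classifier-guidance term during denoising."""
        return x - lr * alignment_grad(x, W, target_emb)

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 8))          # toy shared-space projection
    x = rng.normal(size=8)               # toy latent being denoised
    target = embed(rng.normal(size=8), W)  # embedding of the other modality

    before = embed(x, W) @ target
    for _ in range(100):
        x = guided_step(x, W, target)
    after = embed(x, W) @ target
    print("cosine similarity before/after:", round(before, 3), round(after, 3))
    ```

    In the actual method the gradient would come from backpropagation through the encoder rather than finite differences, and the update would be folded into each diffusion denoising step; the toy loop above only shows that the guided updates pull the latent's embedding toward the target.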

    LLMs Meet Multimodal Generation and Editing: A Survey

    With the recent advancements in large language models (LLMs), there is a growing interest in combining LLMs with multimodal learning. Previous surveys of multimodal large language models (MLLMs) mainly focus on multimodal understanding. This survey elaborates on multimodal generation and editing across various domains, comprising image, video, 3D, and audio. Specifically, we summarize the notable advancements with milestone works in these fields and categorize these studies into LLM-based and CLIP/T5-based methods. Then, we summarize the various roles of LLMs in multimodal generation and exhaustively investigate the critical technical components behind these methods and the multimodal datasets utilized in these studies. Additionally, we dig into tool-augmented multimodal agents that can leverage existing generative models for human-computer interaction. Lastly, we discuss the advancements in the generative AI safety field, investigate emerging applications, and discuss future prospects. Our work provides a systematic and insightful overview of multimodal generation and processing, which is expected to advance the development of Artificial Intelligence for Generative Content (AIGC) and world models. A curated list of all related papers can be found at https://github.com/YingqingHe/Awesome-LLMs-meet-Multimodal-Generation
    Comment: 52 pages with 16 figures, 12 tables, and 545 references. GitHub repository: https://github.com/YingqingHe/Awesome-LLMs-meet-Multimodal-Generation

    Animate-A-Story: Storytelling with Retrieval-Augmented Video Generation

    Generating videos for visual storytelling can be a tedious and complex process that typically requires either live-action filming or graphics animation rendering. To bypass these challenges, our key idea is to utilize the abundance of existing video clips and synthesize a coherent storytelling video by customizing their appearances. We achieve this by developing a framework comprising two functional modules: (i) Motion Structure Retrieval, which provides video candidates with the desired scene or motion context described by query texts, and (ii) Structure-Guided Text-to-Video Synthesis, which generates plot-aligned videos under the guidance of motion structure and text prompts. For the first module, we leverage an off-the-shelf video retrieval system and extract video depths as motion structure. For the second module, we propose a controllable video generation model that offers flexible controls over structure and characters. The videos are synthesized by following the structural guidance and appearance instruction. To ensure visual consistency across clips, we propose an effective concept personalization approach, which allows the specification of the desired character identities through text prompts. Extensive experiments demonstrate that our approach exhibits significant advantages over various existing baselines.
    Comment: GitHub: https://github.com/VideoCrafter/Animate-A-Story Project page: https://videocrafter.github.io/Animate-A-Story

    Pathway to Future Symbiotic Creativity

    This report presents a comprehensive view of our vision for the development path of human-machine symbiotic art creation. We propose a classification of creative systems with a hierarchy of five classes, showing the pathway of creativity evolving from a mimic-human artist (Turing Artist) to a machine artist in its own right. We begin with an overview of the limitations of Turing Artists, then focus on the top two levels of the hierarchy, Machine Artists, emphasizing machine-human communication in art creation. In art creation, it is necessary for machines to understand humans' mental states, including desires, appreciation, and emotions; humans also need to understand machines' creative capabilities and limitations. The rapid development of immersive environments and their further evolution into the new concept of the metaverse enable symbiotic art creation through unprecedented flexibility of bi-directional communication between artists and art manifestation environments. By examining the latest sensor and XR technologies, we illustrate a novel way of art data collection that constitutes the basis of a new form of human-machine bidirectional communication and understanding in art creation. Based on such communication and understanding mechanisms, we propose a novel framework for building future Machine Artists, which comes with the philosophy that a human-compatible AI system should be based on the "human-in-the-loop" principle rather than the traditional "end-to-end" dogma. By proposing a new form of inverse reinforcement learning model, we outline the platform design of machine artists, demonstrate its functions, and showcase some examples of the technologies we have developed. We also provide a systematic exposition of the ecosystem for an AI-based symbiotic art form and community, with an economic model built on NFT technology. Ethical issues for the development of machine artists are also discussed.

    VideoCrafter1: Open Diffusion Models for High-Quality Video Generation

    Video generation has increasingly gained interest in both academia and industry. Although commercial tools can generate plausible videos, there is a limited number of open-source models available for researchers and engineers. In this work, we introduce two diffusion models for high-quality video generation, namely text-to-video (T2V) and image-to-video (I2V) models. T2V models synthesize a video based on a given text input, while I2V models incorporate an additional image input. Our proposed T2V model can generate realistic and cinematic-quality videos with a resolution of 1024 × 576, outperforming other open-source T2V models in terms of quality. The I2V model is designed to produce videos that strictly adhere to the content of the provided reference image, preserving its content, structure, and style. This model is the first open-source I2V foundation model capable of transforming a given image into a video clip while maintaining content preservation constraints. We believe that these open-source video generation models will contribute significantly to the technological advancements within the community.
    Comment: Tech report; GitHub: https://github.com/AILab-CVC/VideoCrafter Homepage: https://ailab-cvc.github.io/videocrafter

    Recent Advances and New Perspectives in Surgery of Renal Cell Carcinoma

    Renal cell carcinoma (RCC) is one of the most common types of cancer in the urogenital system. For localized renal cell carcinoma, nephron-sparing surgery (NSS) is becoming the optimal choice because of its advantage in preserving renal function. Traditionally, partial nephrectomy is performed with renal pedicle clamping to decrease blood loss. However, renal pedicle clamping and the resulting warm renal ischemia affect renal function and increase the risk of postoperative renal failure. More recently, there has been increasing interest in surgical methods that meet the requirements of nephron preservation while shortening the warm renal ischemia time, including assisted or unassisted zero-ischemia surgery. As artificial intelligence increasingly integrates with surgery, three-dimensional visualization of the renal vasculature is being applied in NSS to guide surgeons. In addition, renal carcinoma complexity scoring systems are constantly updated to guide clinicians in selecting appropriate treatments for individual patients. In this article, we provide an overview of recent advances and new perspectives in NSS.