54 research outputs found

    Hierarchical Porous Metal-Organic Frameworks via Solvent Assisted Interpenetration and Linker Extraction

    School of Molecular Sciences (Chemistry)
    Two different pillaring linkers, 4,4′-azopyridine (azopy) and bis(4-pyridyl)acetylene (bpa), were introduced into two-dimensional sheets of [Ni(HBTC)(DMF)2] (where H3BTC = benzene-1,3,5-tricarboxylic acid and DMF = N,N-dimethylformamide) to form two isoreticular MOFs with hms topology. Azopy was chosen to improve CO2 capture, since the interaction between the azo group and CO2 molecules was expected to increase CO2 uptake. Bpa was selected to increase the stability of the structure owing to the rigid alkyne group in the linker. Although bpa-hms [Ni(HBTC)(bpa)] had the larger porosity, azo-hms [Ni(HBTC)(azopy)] showed the greater CO2 uptake at room temperature because of the azo group interacting with CO2. To control the pore size, the hms structures were heated to form interpenetrated structures (hms-c): in the solvent-assisted environment, the neutral ligands detached easily and then reattached to the sheets. This is the first demonstration of interpenetration induced by post-synthetic heat treatment in a complex topology other than the pcu net. The interpenetration made it possible to tune the pore size and enhance the stability of the frameworks, and the reduced pore size also strengthened the interaction between guest molecules and the frameworks. Meanwhile, to compensate for the greatly reduced porosity of the interpenetrated MOFs, a defect-engineering strategy was implemented: the neutral pillars in the hms MOFs could be removed systematically under thermal vacuum. The vacant sites formed mesopores, providing additional porosity; they can serve as active sites for chemical reactions and accelerate the mass transport of guest molecules. These hierarchical interpenetrated MOFs (hms-d) may therefore be effectively applied in catalysis.

    Event-based Optical Flow Estimation via Multi-layer Representation

    Graduate School of Artificial Intelligence
    Optical flow estimation plays a crucial role in computer vision applications, but its accuracy suffers under factors such as changes in lighting conditions. In recent years, event cameras have emerged as promising sensors for optical flow estimation in challenging scenarios, owing to their high temporal resolution and ability to capture pixel-level brightness changes. However, event-based optical flow estimation still faces limitations stemming from the sparse nature of event data. To address these limitations, this study proposes a multi-layer representation method for optical flow estimation: the input data are divided into multiple layers and pixel-wise motion is estimated in each layer, enabling a clearer depiction of object motion trajectories and velocity variations. Additionally, object detection is incorporated as an auxiliary task to enhance motion estimation by focusing on event data around objects and leveraging boundary information. By combining the advantages of event cameras with the multi-layer representation method, this research aims to enhance event-based optical flow estimation. Experimental results demonstrate the effectiveness of the proposed approach in achieving more accurate and detailed optical flow estimation.
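    The layering idea can be illustrated with a minimal sketch that bins an event stream into uniform time slices and accumulates one polarity-signed frame per layer. Note that the uniform-time layering rule and the function name are assumptions of this sketch, not the thesis's actual criterion:

```python
import numpy as np

def multilayer_event_frames(events, num_layers, height, width):
    """Split an event stream into temporal layers and accumulate one
    polarity-signed frame per layer.

    events: (N, 4) array of (x, y, t, p) with polarity p in {-1, +1}.
    Returns an array of shape (num_layers, height, width).

    Illustrative sketch only: events are assigned to uniform time
    slices, which is an assumption; the thesis's layering rule may differ.
    """
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3]
    t0, t1 = t.min(), t.max()
    # Assign each event to a uniform time slice (layer); clamp the last event.
    layer = np.minimum(((t - t0) / max(t1 - t0, 1e-9) * num_layers).astype(int),
                       num_layers - 1)
    frames = np.zeros((num_layers, height, width))
    # Accumulate signed polarity at each event's pixel within its layer.
    np.add.at(frames, (layer, y, x), p)
    return frames
```

    Per-layer frames like these give a downstream flow network a coarse notion of when, as well as where, brightness changed, which is what allows motion trajectories to be separated across layers.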

    DIFFERENCES OF POSTURE ON PUSH-OFF PHASE BETWEEN ACTUAL SPEED SKATING AND SLIDE-BOARD TRAINING

    Slide-board training is a practical way to practice skating during the off-season. However, because the slide-board differs considerably from the ice surface of actual skating, it may distort skating posture. The purpose of this study was to analyze the differences in posture during the push-off phase between actual speed skating and slide-board training. The results showed that on the slide-board the distance between the two feet was shorter, as were the rotation angles of both feet; the hip angle was lower throughout the phase, while the knee and ankle angles were higher. In conclusion, the restricted space on the slide-board affected the position and rotation of both the stable and push-off feet as well as the joint extension of the stable leg. Hence, the structural design of the slide-board needs to be improved to facilitate extension of the knee and ankle in the medial-lateral direction.

    Single crystalline hollow metal-organic frameworks: a metal-organic polyhedron single crystal as a sacrificial template

    Single crystalline hollow metal-organic frameworks (MOFs) with cavity dimensions ranging from several micrometers to hundreds of micrometers were prepared using a metal-organic polyhedron single crystal as a sacrificial hard template. The hollow nature of the MOF crystal was confirmed by scanning electron microscopy of the crystal sliced using a focused ion beam.

    No Token Left Behind: Reliable KV Cache Compression via Importance-Aware Mixed Precision Quantization

    Key-Value (KV) caching has become an essential technique for accelerating the inference speed and throughput of generative Large Language Models (LLMs). However, the memory footprint of the KV cache poses a critical bottleneck in LLM deployment, as the cache size grows with batch size and sequence length, often surpassing even the size of the model itself. Although recent methods select and evict unimportant KV pairs from the cache to reduce memory consumption, the potential ramifications of eviction on the generative process have yet to be thoroughly examined. In this paper, we examine the detrimental impact of cache eviction and observe that unforeseen risks arise as the information contained in the KV pairs is exhaustively discarded, resulting in safety breaches, hallucinations, and context loss. Surprisingly, we find that preserving even a small amount of the information contained in the evicted KV pairs via reduced-precision quantization substantially recovers the incurred degradation. On the other hand, we observe that the important KV pairs must be kept at a relatively higher precision to safeguard the generation quality. Motivated by these observations, we propose Mixed-precision KV cache (MiKV), a reliable cache compression method that simultaneously preserves context details by retaining the evicted KV pairs in low precision and ensures generation quality by keeping the important KV pairs in high precision. Experiments on diverse benchmarks and LLM backbones show that our proposed method offers a state-of-the-art trade-off between compression ratio and performance, compared to other baselines.
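    A minimal numpy sketch of the core idea follows: important cache entries are kept verbatim, while the rest are stored at reduced precision instead of being evicted. The importance score, the 25% keep ratio, and the per-token 4-bit uniform quantizer are illustrative assumptions, not MiKV's exact design:

```python
import numpy as np

def mixed_precision_compress(kv, importance, keep_ratio=0.25, bits=4):
    """Importance-aware mixed-precision compression of a KV cache.

    kv: (T, D) cached vectors; importance: (T,) scores (higher = keep).
    Returns the reconstructed (dequantized) cache of shape (T, D),
    so the effect of low-precision storage can be inspected directly.
    """
    T = kv.shape[0]
    k = max(1, int(T * keep_ratio))
    # Indices of the most important entries, retained in full precision.
    important = set(np.argsort(importance)[-k:].tolist())
    levels = 2 ** bits - 1
    out = np.empty_like(kv)
    for i in range(T):
        if i in important:
            out[i] = kv[i]                      # high precision: kept verbatim
        else:
            lo, hi = kv[i].min(), kv[i].max()
            scale = max(hi - lo, 1e-9) / levels
            q = np.round((kv[i] - lo) / scale)  # per-token uniform quantization
            out[i] = q * scale + lo             # dequantize low-precision entry
    return out
```

    The key contrast with pure eviction is visible in the return value: low-importance rows come back with bounded quantization error rather than disappearing entirely, so their contextual information survives.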

    AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models

    There is growing interest in adapting large-scale language models using parameter-efficient fine-tuning methods. However, accelerating the model itself and achieving better inference efficiency through model compression have not yet been thoroughly explored. Model compression could provide the benefits of reducing memory footprints, enabling low-precision computations, and ultimately achieving cost-effective inference. To combine parameter-efficient adaptation and model compression, we propose AlphaTuning, which consists of post-training quantization of the pre-trained language model and fine-tuning of only some parts of the quantized parameters for a target task. Specifically, AlphaTuning works by employing binary-coding quantization, which factorizes the full-precision parameters into binary parameters and a separate set of scaling factors. During the adaptation phase, the binary values are frozen for all tasks, while the scaling factors are fine-tuned for the downstream task. We demonstrate that AlphaTuning, when applied to GPT-2 and OPT, performs competitively with full fine-tuning on a variety of downstream tasks while achieving a >10x compression ratio under 4-bit quantization and a >1,000x reduction in the number of trainable parameters. Comment: Findings of EMNLP 2022
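    The factorization can be sketched with a standard greedy binary-coding quantizer: each row of a weight matrix is approximated as a sum of {-1, +1} matrices scaled by per-row factors, and under the adaptation scheme described above the binary matrices would stay frozen while the scaling factors are fine-tuned. The greedy residual fit below is a common construction shown as an assumption-laden sketch, not AlphaTuning's exact implementation:

```python
import numpy as np

def binary_coding_quantize(W, num_bits):
    """Greedily factorize W ~= sum_i alpha_i * B_i with B_i in {-1, +1}.

    Returns (alphas, binaries): per-row scaling factors (the parameters
    that would be fine-tuned) and frozen binary code matrices.
    """
    R = W.copy()
    alphas, binaries = [], []
    for _ in range(num_bits):
        B = np.where(R >= 0, 1.0, -1.0)   # frozen binary codes
        alpha = np.abs(R).mean(axis=1)    # per-row scale minimizing the fit
        alphas.append(alpha)
        binaries.append(B)
        R = R - alpha[:, None] * B        # fit the next bit to the residual
    return alphas, binaries

def reconstruct(alphas, binaries):
    """Rebuild the approximated weight matrix from the factorization."""
    return sum(a[:, None] * B for a, B in zip(alphas, binaries))
```

    Because only the `alphas` (one scalar per row per bit) would be trainable, the trainable-parameter count drops by orders of magnitude relative to the full weight matrix, which is the source of the >1,000x reduction the abstract reports.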

    Le teorie sociologiche sulla comunicazione di massa. Dieci lezioni

    Communication research has by now gained its own scientific and academic autonomy, supported by recognition of the quality and the social and cultural relevance of its object of study. Mass communications are a defining reality of the new anthropological era, manifesting in many aspects that affect the symbolic and material reproduction of social systems. Faced with the emergence of such a pervasive and multiform phenomenon, scholars have for about a century confronted the problem of how to account for it adequately. The book reconstructs the development of the different paradigms that became established over the course of the twentieth century, orienting theoretical models and research on the media. CONTENTS: 11 - Preface. What is alive and what is dead in twentieth-century communication theory. Toward a historiography of the theory, its main models and main schools, by Michele Infante; 31 - Introduction; 37 - Chapter I. Early reflections on the effects of the mass media; 63 - Chapter II. The discovery of intervening variables; 85 - Chapter III. Social networks and the "two-step flow"; 121 - Chapter IV. The uses and gratifications approach; 141 - Chapter V. Critical theory vs. the culture industry; 171 - Chapter VI. Cultural Studies; 197 - Chapter VII. Agenda-setting theory; 227 - Chapter VIII. The spiral of silence theory; 239 - Chapter IX. Cultivation theory; 257 - Chapter X. Dependency theory; 273 - Bibliography

    Resolving Homonymy with Correlation Clustering in Scholarly Digital Libraries

    As scholarly data increase rapidly, scholarly digital libraries (SDLs), which supply publication data through convenient online interfaces, have become popular and important tools for researchers. Researchers use SDLs for various purposes, including searching for an author's publications, assessing an author's impact through citations, and identifying an author's research topics. However, common names among authors make it difficult to correctly identify one author's works among a large number of scholarly publications. Abbreviated first and middle names make it even harder to identify and distinguish authors whose names share the same representation (i.e., spelling). Several disambiguation methods have addressed the problem under their own assumptions, usually that inputs such as the number of same-named authors, training sets, or rich and clean information about papers are given. Considering the size of today's scholarship records and their inconsistent formats, we expect these assumptions to be very hard to meet. We adopt the common assumption that coauthors are likely to write more than one paper together, and propose an unsupervised approach that groups papers from the same author using only the most commonly available information: author lists. We represent each paper as a point in an author-name space, apply dimension reduction to find author names that frequently appear together in papers, and cluster papers with a vector-similarity measure well suited to the name-disambiguation task. The main advantage of our approach is that it uses only coauthor information as input. We evaluate our method on publication records collected from DBLP and show that it yields better disambiguation than five other clustering methods in terms of cluster purity and fragmentation.
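    The "coauthor lists as sole input" idea can be illustrated with a simplified stand-in. The sketch below replaces the dimension reduction and tailored similarity with plain Jaccard overlap of coauthor sets and greedy single-link merging; the function name, threshold, and merging rule are assumptions of this sketch, not the thesis's actual method:

```python
def cluster_papers_by_coauthors(papers, threshold=0.2):
    """Group papers likely written by the same ambiguous author, using
    only coauthor lists.

    papers: list of coauthor-name lists (the ambiguous name removed).
    Returns a list of clusters, each a list of paper indices.
    """
    sets = [set(p) for p in papers]
    clusters = []  # each entry: (member indices, union of coauthor names)
    for i, s in enumerate(sets):
        best = None
        for c in clusters:
            union = c[1]
            # Jaccard overlap between this paper's coauthors and the
            # cluster's accumulated coauthor pool.
            sim = len(s & union) / len(s | union) if (s | union) else 0.0
            if sim >= threshold and (best is None or sim > best[0]):
                best = (sim, c)
        if best is not None:
            best[1][0].append(i)   # join the most similar cluster
            best[1][1].update(s)
        else:
            clusters.append(([i], set(s)))
    return [c[0] for c in clusters]
```

    For example, two papers sharing even one coauthor out of three distinct names already exceed a 0.2 threshold and merge, while papers with disjoint coauthor sets start new clusters, mirroring the assumption that repeated coauthorship signals the same underlying author.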