
    Exploration of Mode of Software Service Outsourcing Talents Training Based on School-Enterprise Cooperation and Engineering Education

    With the rapid development of the international service outsourcing industry, a mismatch has arisen between the talent supplied by existing training modes and the demand of service outsourcing enterprises. To adapt to the knowledge-based economy and social development, our university has explored IT service outsourcing talent training and adopted a number of methods and measures; the training results obtained aim to provide a reference for high-quality IT service outsourcing personnel training and educational reform.

    A Task Allocation Algorithm with Weighted Average Velocity Based on Online Active Period

    Some complex scientific calculations require very large computing resources. To a certain extent, improvements in computer performance have met the needs of many computations, but many of the more complex calculations still cannot be solved effectively. Volunteer computing is a computational method that divides a complex computing task into simple subtasks, distributes them to volunteered computing resources, and collects the results. In this process, the task assignment module is an extremely important part of the whole computing platform. Many existing task allocation algorithms (TAA) group volunteer computers by similar machine conditions. The TAA used in this work groups computers with similar online active periods, and computation efficiency is improved by using the weighted average velocity as a parameter. The experimental results showed that a TAA with weighted average velocity based on the online active period can effectively improve the performance of a volunteer computing platform. Keywords: Volunteer computing; Task allocation algorithm; Weighted average velocity; Online active period
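
    The abstract does not give the algorithm's internals, so the following is only a minimal sketch of the idea under stated assumptions: the grouping bucket, the decay-based weighting, and all names are hypothetical, not taken from the paper. Volunteers are bucketed by similar online active periods and, within each group, subtasks go first to the machines with the highest recency-weighted average velocity.

```python
from dataclasses import dataclass, field

@dataclass
class Volunteer:
    vid: str
    active_start: int   # start of the daily online window, hour in [0, 24)
    velocities: list = field(default_factory=list)  # past speeds, oldest first

def weighted_avg_velocity(velocities, decay=0.8):
    """Recency-weighted average velocity: newer measurements count more.
    The decay factor is an assumption; the paper does not specify weights."""
    if not velocities:
        return 0.0
    weights = [decay ** i for i in range(len(velocities))][::-1]
    return sum(w * v for w, v in zip(weights, velocities)) / sum(weights)

def group_by_active_period(volunteers, bucket_hours=6):
    """Group machines whose online windows begin in the same time bucket."""
    groups = {}
    for v in volunteers:
        groups.setdefault(v.active_start // bucket_hours, []).append(v)
    return groups

def allocate(subtasks, volunteers):
    """Hand out subtasks group by group, fastest machines first."""
    assignment, queue = {}, list(subtasks)
    for _, group in sorted(group_by_active_period(volunteers).items()):
        ranked = sorted(group, key=lambda v: weighted_avg_velocity(v.velocities),
                        reverse=True)
        for vol in ranked:
            if not queue:
                return assignment
            assignment[queue.pop(0)] = vol.vid
    return assignment
```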

    A Dynamic Task Allocation Algorithm Based on Weighted Velocity

    Volunteer computing is a way for people around the world to contribute free computer resources to scientific calculation or data analysis over the Internet. It provides an effective solution to the problems of large-scale basic scientific computing and its growing demand for computing resources. Task allocation is a very important part of volunteer computing, and an effective algorithm can significantly improve computational efficiency. At present, most existing approaches divide tasks according to a volunteer computer's hardware conditions or initial state. This may have no obvious impact on computing efficiency in the short term, but such allocation becomes less flexible as the platform's idle resources shrink or grow. To make full use of idle computer resources, a dynamic task allocation algorithm (TAA) based on weighted velocity is proposed in this work. The research results showed that the weighted velocity can serve as a parameter both to gauge a computer's computing performance and to manage task allocation dynamically. Keywords: volunteer computing, task allocation, weighted average velocity
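
    A rough sketch of how the dynamic variant might behave (the exponential smoothing and all names below are assumptions, not the paper's formulation): the allocator refreshes a volunteer's weighted velocity whenever a result arrives, so the next subtask always goes to the currently fastest idle machine.

```python
import heapq

class DynamicAllocator:
    """Keeps a max-heap of volunteers keyed by their weighted velocity,
    refreshed as results arrive, so allocation adapts at run time."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha      # smoothing factor (assumed, not from the paper)
        self.velocity = {}      # volunteer id -> current weighted velocity
        self.heap = []          # entries of (-velocity, volunteer id)

    def report_result(self, vid, observed_velocity):
        # Exponentially weighted update: recent speed dominates older history.
        old = self.velocity.get(vid, observed_velocity)
        new = self.alpha * observed_velocity + (1 - self.alpha) * old
        self.velocity[vid] = new
        heapq.heappush(self.heap, (-new, vid))

    def next_volunteer(self):
        # Lazy deletion: skip heap entries made stale by newer updates.
        while self.heap:
            neg_v, vid = heapq.heappop(self.heap)
            if abs(-neg_v - self.velocity.get(vid, -1.0)) < 1e-9:
                return vid
        return None
```

    Calling report_result after each completed subtask keeps the ranking current, which is what lets the allocation adapt as idle resources grow or shrink.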

    High and Increasing Oxa-51 DNA Load Predict Mortality in Acinetobacter baumannii Bacteremia: Implication for Pathogenesis and Evaluation of Therapy

    BACKGROUND: While quantification of viral loads has been successfully employed in clinical medicine and has provided valuable insights and useful markers for several viral diseases, the potential of measuring bacterial DNA load to predict outcome or monitor therapeutic responses remains largely unexplored. We tested this possibility by investigating bacterial loads in Acinetobacter baumannii bacteremia, a rapidly increasing nosocomial infection characterized by high mortality, drug resistance, and multiple, complicated risk factors, all of which underscore the need for good markers to evaluate therapeutics. METHODS AND FINDINGS: We established a quantitative real-time PCR assay based on an A. baumannii-specific gene, Oxa-51, and conducted a prospective study examining A. baumannii loads in 318 sequential blood samples from 51 adult patients (17 survivors, 34 nonsurvivors) with culture-proven A. baumannii bacteremia in the intensive care units. Oxa-51 DNA loads were significantly higher in nonsurvivors than in survivors on days 1, 2 and 3 (P=0.03, 0.001 and 0.006, respectively). Compared with survivors, nonsurvivors had a higher maximum Oxa-51 DNA load and a trend of increase from day 0 to day 3 (P<0.001); together with the Pitt bacteremia score, these were independent predictors of mortality by multivariate analysis (P=0.014 and 0.016 for maximum Oxa-51 DNA and change of Oxa-51 DNA, respectively). Kaplan-Meier analysis revealed significantly different survival curves in patients with different maximum Oxa-51 DNA loads and changes of Oxa-51 DNA from day 0 to day 3. CONCLUSIONS: A high Oxa-51 DNA load and its initial increase can predict mortality. Moreover, monitoring Oxa-51 DNA load in blood may provide direct parameters for evaluating new regimens against A. baumannii in future clinical studies.
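
    As a concrete illustration of the survival analysis described above, here is a minimal sketch of a Kaplan-Meier comparison stratified by maximum DNA load. It uses the lifelines library, and the per-patient numbers and the median cut-off are invented for illustration; they are not the study's data.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented per-patient data: follow-up days, death indicator, and maximum
# Oxa-51 DNA load; values are illustrative only.
days  = np.array([30, 12, 7, 28, 5, 30, 9, 30])
death = np.array([0, 1, 1, 0, 1, 0, 1, 0])
load  = np.array([1e2, 1e5, 5e5, 3e2, 2e6, 8e1, 4e5, 2e2])

high = load > np.median(load)   # dichotomize at the median (an assumption)

kmf = KaplanMeierFitter()
for mask, label in [(high, "high load"), (~high, "low load")]:
    kmf.fit(days[mask], event_observed=death[mask], label=label)
    kmf.plot_survival_function()

# Log-rank test for a difference between the two survival curves.
result = logrank_test(days[high], days[~high], death[high], death[~high])
print(result.p_value)
```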

    SafeDiffuser: Safe Planning with Diffusion Probabilistic Models

    Diffusion model-based approaches have shown promise in data-driven planning, but they come with no safety guarantees, making them hard to apply in safety-critical applications. To address this challenge, we propose a new method, called SafeDiffuser, to ensure that diffusion probabilistic models satisfy specifications by using a class of control barrier functions. The key idea of our approach is to embed the proposed finite-time diffusion invariance into the denoising diffusion procedure, which enables trustworthy diffusion data generation. Moreover, we demonstrate that our finite-time diffusion invariance method, realized through generative models, not only maintains generalization performance but also creates robustness in safe data generation. We test our method on a series of safe planning tasks, including maze path generation, legged robot locomotion, and 3D space manipulation, with results showing the advantages of robustness and guarantees over vanilla diffusion models. (Comment: 19 pages; website: https://safediffuser.github.io/safediffuser)
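
    The abstract does not spell out the mechanism, but the general idea of embedding an invariance into denoising can be sketched as follows. After each reverse-diffusion update, a control-barrier-style correction nudges samples that violate a safety constraint back toward the safe set; the circular-obstacle barrier, the step rule, and the names below are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def barrier(x, center, radius):
    """CBF-style barrier h(x) >= 0 on the safe set: here, keeping samples
    outside a circular obstacle (an illustrative constraint)."""
    return ((x - center) ** 2).sum(-1) - radius ** 2

def safe_denoise_step(x_t, denoise_fn, t, center, radius, gamma=0.5):
    """One reverse-diffusion update followed by a barrier correction that
    nudges violating samples along grad(h) back toward the safe set."""
    x = denoise_fn(x_t, t)                 # ordinary denoising update
    x = x.detach().requires_grad_(True)
    h = barrier(x, center, radius)
    if (h < 0).any():
        grad = torch.autograd.grad(h.sum(), x)[0]
        # Step size scaled so one move roughly cancels the violation;
        # the clamp leaves already-safe samples untouched.
        step = (-h).clamp(min=0) / (grad.norm(dim=-1) ** 2 + 1e-8)
        x = x + gamma * step.unsqueeze(-1) * grad
    return x.detach()
```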

    On-Device Training Under 256KB Memory

    On-device training enables the model to adapt to new data collected from the sensors by fine-tuning a pre-trained model. Users can benefit from customized AI models without having to transfer the data to the cloud, protecting privacy. However, the training memory consumption is prohibitive for IoT devices that have tiny memory resources. We propose an algorithm-system co-design framework to make on-device training possible with only 256KB of memory. On-device training faces two unique challenges: (1) the quantized graphs of neural networks are hard to optimize due to low bit-precision and the lack of normalization; (2) the limited hardware resource does not allow full back-propagation. To cope with the optimization difficulty, we propose Quantization-Aware Scaling to calibrate the gradient scales and stabilize 8-bit quantized training. To reduce the memory footprint, we propose Sparse Update to skip the gradient computation of less important layers and sub-tensors. The algorithm innovation is implemented by a lightweight training system, Tiny Training Engine, which prunes the backward computation graph to support sparse updates and offloads runtime auto-differentiation to compile time. Our framework is the first solution to enable tiny on-device training of convolutional neural networks under 256KB SRAM and 1MB Flash without auxiliary memory, using less than 1/1000 of the memory of PyTorch and TensorFlow while matching the accuracy on the tinyML application VWW. Our study enables IoT devices not only to perform inference but also to continuously adapt to new data for on-device lifelong learning. A video demo can be found here: https://youtu.be/0pUFZYdoMY8. (Comment: NeurIPS 2022)
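
    As a loose illustration of the Sparse Update idea (not the paper's system, which prunes the backward graph at compile time inside the Tiny Training Engine), one can emulate the effect in PyTorch by freezing unimportant layers and zeroing the gradients of the frozen channels in partially updated layers. The schedule below is hypothetical.

```python
import torch.nn as nn

# Hypothetical sparse-update schedule: which layers may be updated, and what
# fraction of each layer's leading output channels. Everything else stays
# frozen. The paper derives its real schedules by contribution analysis.
SPARSE_SCHEDULE = {"classifier": 1.0, "features.7": 0.25}

def apply_sparse_update(model: nn.Module, schedule=SPARSE_SCHEDULE):
    for name, param in model.named_parameters():
        layer = name.rsplit(".", 1)[0]
        param.requires_grad = layer in schedule   # skip backward elsewhere

    def mask_grads():
        # Zero gradients of the frozen channels in partially updated layers,
        # so only the chosen sub-tensor actually changes.
        for name, param in model.named_parameters():
            layer = name.rsplit(".", 1)[0]
            ratio = schedule.get(layer)
            if ratio is not None and ratio < 1.0 and param.grad is not None:
                keep = int(param.shape[0] * ratio)
                param.grad[keep:] = 0
    return mask_grads
```

    The returned mask_grads() would be called between loss.backward() and optimizer.step(). The real memory saving, however, comes from never materializing the pruned parts of the backward graph, which requires compile-time support.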

    Variation of Al species during water treatment:correlation with treatment efficiency under varied hydraulic conditions

    The concentration of hydrolyzed coagulant ion species is a key factor in determining drinking water treatment efficiency, yet the direct correlation of treatment efficiency with changes in these species during coagulation has not been addressed. We investigated this correlation under different hydraulic conditions and treatment efficiencies, including changes in the removal of turbidity, ultraviolet absorbance at 254 nm (UV254) and dissolved organic carbon (DOC). The results highlighted that the Al species (monomeric Ala, medium polymeric Alb and colloidal Alc) behaved differently during coagulation, affecting treatment efficiency. When the mixing speed was varied, the removal of Alc species had a strong negative correlation with treatment efficiency, whereas positive correlations were found under other hydraulic conditions. The removal of Ala species was positively correlated with treatment efficiency, but under other hydraulic conditions the low abundance of Ala species made the correlation difficult to observe. The Alb species were significantly and positively correlated with treatment efficiency, with the highest correlation coefficient (R2) of 0.87. Correlating the metallic species with the removal efficiencies of DOC and UV254 produced higher R2 values, and the rate of removal of Alb species correlated better with DOC or UV254 removal efficiency than that of Alc. HIGHLIGHTS: Hydrolyzed coagulant ion species are considered one of the key factors determining drinking water treatment efficiency. Treatment efficiency is often correlated with the species distribution in the coagulant rather than in the water. The variation in species removal was investigated in the coagulation system. Under varied hydraulic conditions a positive correlation was observed.
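
    The reported R2 values are squared correlation coefficients from linear regression of species removal against treatment efficiency; a minimal sketch with invented numbers (illustrative only, not the paper's data) shows the computation.

```python
import numpy as np
from scipy.stats import linregress

# Invented paired measurements across jar tests: removal of Alb species (%)
# versus DOC removal efficiency (%); illustrative only.
alb_removal = np.array([55, 62, 70, 74, 81, 88])
doc_removal = np.array([40, 47, 55, 58, 66, 72])

fit = linregress(alb_removal, doc_removal)
print(f"R^2 = {fit.rvalue ** 2:.2f}")   # squared Pearson r
```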

    Optimum dynamic characteristic control approach for building mass damper design

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/142538/1/eqe2995.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/142538/2/eqe2995_am.pdf

    PockEngine: Sparse and Efficient Fine-tuning in a Pocket

    On-device learning and efficient fine-tuning enable continuous and privacy-preserving customization (e.g., locally fine-tuning large language models on personalized data). However, existing training frameworks are designed for cloud servers with powerful accelerators (e.g., GPUs, TPUs) and lack the optimizations for learning on the edge, which faces the challenges of resource limitations and edge hardware diversity. We introduce PockEngine: a tiny, sparse and efficient engine that enables fine-tuning on various edge devices. PockEngine supports sparse backpropagation: it prunes the backward graph and sparsely updates the model with measured memory saving and latency reduction while maintaining model quality. Second, PockEngine is compilation-first: the entire training graph (including forward, backward and optimization steps) is derived at compile time, which reduces runtime overhead and opens opportunities for graph transformations. PockEngine also integrates a rich set of training graph optimizations, such as operator reordering and backend switching, which further reduce training cost. PockEngine supports diverse applications, frontends and hardware backends: it flexibly compiles and tunes models defined in PyTorch/TensorFlow/Jax and deploys binaries to mobile CPU/GPU/DSPs. We evaluated PockEngine on both vision models and large language models. PockEngine achieves up to a 15× speedup over off-the-shelf TensorFlow (Raspberry Pi) and a 5.6× memory saving in back-propagation (Jetson AGX Orin). Remarkably, PockEngine enables fine-tuning LLaMav2-7B on NVIDIA Jetson AGX Orin at 550 tokens/s, 7.9× faster than PyTorch.
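
    Sparse backpropagation can be pictured with a small PyTorch sketch (illustrative only; PockEngine performs this pruning in its compiler rather than in eager mode): running the frozen prefix of a network under no_grad means the backward graph is never built for those layers, so backward touches only the trainable suffix.

```python
import torch
import torch.nn as nn

class SparseBackpropNet(nn.Module):
    """Gradients flow only through the last layers; the backward graph for
    the frozen prefix is never built, saving memory and latency."""

    def __init__(self, cut=2):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(64, 64) for _ in range(4))
        self.head = nn.Linear(64, 10)
        self.cut = cut          # layers before this index are frozen

    def forward(self, x):
        with torch.no_grad():   # no autograd graph for the frozen prefix
            for layer in self.layers[:self.cut]:
                x = torch.relu(layer(x))
        for layer in self.layers[self.cut:]:
            x = torch.relu(layer(x))        # graph is built from here on
        return self.head(x)

net = SparseBackpropNet()
loss = net(torch.randn(8, 64)).sum()
loss.backward()                 # touches only the trainable suffix
```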
