
    Singleton-Optimal LRCs and Perfect LRCs via Cyclic and Constacyclic Codes

    Locally repairable codes (LRCs) have emerged as an important coding scheme in distributed storage systems (DSSs), offering relatively low repair cost by accessing a small number of non-failed nodes. Theoretical bounds and optimal constructions of LRCs have been widely investigated. Optimal LRCs via cyclic and constacyclic codes provide the significant benefits of an elegant algebraic structure and an efficient encoding procedure. In this paper, we continue the study of constructions of optimal LRCs via cyclic and constacyclic codes with long code length. Specifically, we first obtain two classes of $q$-ary cyclic Singleton-optimal $(n, k, d=6; r=2)$-LRCs: one with length $n=3(q+1)$ when $3 \mid (q-1)$ and $q$ is even, and one with length $n=\frac{3}{2}(q+1)$ when $3 \mid (q-1)$ and $q \equiv 1 \pmod{4}$. To the best of our knowledge, this is the first construction of $q$-ary cyclic Singleton-optimal LRCs with length $n>q+1$ and minimum distance $d \geq 5$. On the other hand, an LRC achieving the Hamming-type bound is called a perfect LRC. By using cyclic and constacyclic codes, we construct two new families of $q$-ary perfect LRCs with length $n=\frac{q^m-1}{q-1}$, minimum distance $d=5$, and locality $r=2$.
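
    For context, the Singleton-type bound referenced by both LRC papers in this list is the classical one of Gopalan et al. for an $(n, k, d; r)$-LRC; a code is Singleton-optimal when it meets this bound with equality, while a perfect LRC instead attains a Hamming-type (sphere-packing-style) bound with equality:

```latex
% Singleton-type bound for an (n, k, d; r)-LRC (Gopalan et al., 2012)
d \le n - k - \left\lceil \frac{k}{r} \right\rceil + 2
```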

    Solving multiple-criteria R&D project selection problems with a data-driven evidential reasoning rule

    In this paper, a likelihood-based evidence acquisition approach is proposed to acquire evidence from experts' assessments as recorded in historical datasets. Then a data-driven evidential reasoning rule based model is introduced to the R&D project selection process by combining multiple pieces of evidence with different weights and reliabilities. As a result, total belief degrees and overall performance scores can be generated for ranking and selecting projects. Finally, a case study on R&D project selection for the National Natural Science Foundation of China (NSFC) is conducted to show the effectiveness of the proposed model. The data-driven evidential reasoning rule based model for project evaluation and selection (1) utilizes experimental data to represent experts' assessments by using belief distributions over the set of final funding outcomes, and through these historical statistics it helps experts and applicants understand the funding probability associated with a given assessment grade, (2) implies the mapping relationships between the evaluation grades and the final funding outcomes by using historical data, and (3) provides a way to make fair decisions by taking experts' reliabilities into account. In the data-driven evidential reasoning rule based model, experts play different roles in accordance with their reliabilities, which are determined by their previous review track records, and the selection process is made interpretable and fairer. The newly proposed model reduces the time-consuming panel review work for both managers and experts, and significantly improves the efficiency and quality of the project selection process. Although the model is demonstrated for project selection in the NSFC, it can be generalized to other funding agencies or industries. Comment: 20 pages, forthcoming in International Journal of Project Management (2019).
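
    To illustrate the kind of combination step such a model performs, below is a minimal Python sketch: classical reliability discounting followed by Dempster's orthogonal sum, used here as a simplified stand-in for the full evidential reasoning rule of Yang and Xu (which additionally separates evidence weight from reliability). The grades, belief degrees, and reliability values are illustrative, not taken from the paper's NSFC data.

```python
# Simplified combination of expert evidence over funding-outcome grades.
GRADES = ["fund", "reject"]

def discount(belief, reliability):
    # Shafer discounting: the unreliable share of the mass moves to the
    # whole frame of discernment (represented by None).
    m = {g: reliability * p for g, p in belief.items()}
    m[None] = 1.0 - reliability
    return m

def combine(m1, m2):
    # Dempster's orthogonal sum in the special case where every focal
    # element is either a single grade or the whole frame.
    out = {g: m1[g] * m2[g] + m1[g] * m2[None] + m1[None] * m2[g]
           for g in GRADES}
    out[None] = m1[None] * m2[None]
    norm = sum(out.values())  # = 1 - conflict between the two experts
    return {k: v / norm for k, v in out.items()}

# Two experts' belief distributions, with reliabilities that could be
# estimated from their historical review track records.
e1 = discount({"fund": 0.7, "reject": 0.3}, reliability=0.9)
e2 = discount({"fund": 0.4, "reject": 0.6}, reliability=0.6)

print(combine(e1, e2))  # combined belief degrees used to rank the project
```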

    Bounds and Constructions of Singleton-Optimal Locally Repairable Codes with Small Localities

    Constructions of optimal locally repairable codes (LRCs) achieving the Singleton-type bound have been exhaustively investigated in recent years. In this paper, we consider new bounds and constructions of Singleton-optimal LRCs with minimum distance $d=6$ and locality $r=3$, and with minimum distance $d=7$ and locality $r=2$, respectively. Firstly, we establish equivalent connections between the existence of these two families of LRCs and the existence of certain subsets of lines in the projective space with specific properties. Then, we employ the line-point incidence matrix and Johnson bounds for constant-weight codes to derive new improved bounds on the code length, which are tighter than known results. Finally, by using some techniques from finite fields and finite geometry, we give new constructions of Singleton-optimal LRCs that have larger length than previous ones.
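
    For reference, the iterated Johnson bound for binary constant-weight codes used above can be stated as follows, where $A(n, 2\delta, w)$ denotes the maximum size of a binary code of length $n$, constant weight $w$, and minimum distance $2\delta$ (the paper's specific application of it to line sets in projective space is not reproduced here):

```latex
A(n, 2\delta, w) \le
  \left\lfloor \frac{n}{w}
  \left\lfloor \frac{n-1}{w-1} \cdots
  \left\lfloor \frac{n-w+\delta}{\delta}
  \right\rfloor \cdots \right\rfloor \right\rfloor
```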

    The Key Successful Factors of Internet Business: The Study of Online Bookshop

    Electronic commerce is viewed as an increasingly important issue given the rapid growth of online commercial activities. Books, having the properties of numerous categories, low unit price, and convenient delivery, have become major products sold online. Online bookshops are therefore appropriate subjects for identifying the key success factors of Internet business. We first conduct two rounds of the Delphi method to confirm the factors that are important to success in Internet business, yielding 32 candidate factors. We then calculate the relative weight of each factor with the Analytic Hierarchy Process (AHP) and select the 14 factors with the highest weights as the key success factors of Internet business. These 14 factors, in order of weight, include: the ability to manage business change, filling the workplace with entrepreneurs and growing with them, the ability to manage customer relationships, targeting the right customers, pricing that can react to the market quickly, building knowledge management systems, excellent service after payment, building a distribution center to develop unbeatable logistics, the ability to manage cost, offering great value, the ability to market using databases, building goodwill and brand image, gaining the trust of the virtual community and maintaining it continually, and the ability to develop technology.
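
    As an aside, the AHP weighting step the study relies on is easy to sketch: each factor's weight is the corresponding entry of the normalized principal eigenvector of a pairwise comparison matrix, accepted only if a consistency check passes. The 3-factor matrix below is hypothetical; the study's actual 32-factor judgments are not reproduced.

```python
import numpy as np

# Hypothetical pairwise comparison matrix on Saaty's 1-9 scale.
A = np.array([
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   2.0],
    [1 / 5, 1 / 2, 1.0],
])

# AHP weights: the normalized principal right eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

# Consistency: CI = (lambda_max - n) / (n - 1), CR = CI / RI, with
# random index RI = 0.58 for n = 3 (Saaty's table); CR < 0.1 passes.
n = A.shape[0]
cr = (eigvals[k].real - n) / (n - 1) / 0.58
print("weights:", w.round(3), "consistency ratio:", round(cr, 3))
```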

    Coronary Computed Tomography Angiography—A Promising Imaging Modality in Diagnosing Coronary Artery Disease

    Background: Traditionally, information on coronary artery lesions is obtained from invasive coronary angiography (CAG). The clinical applicability and diagnostic performance of the newly developed 64-slice multislice computed tomography (MSCT) scanner in coronary angiographic evaluation are not well evaluated. Methods: Coronary computed tomography angiography (CCTA) was performed in 345 patients (119 women, 226 men; mean age, 59.64 ± 11.67 years). Concomitant CAG was performed in 53 patients. The diagnostic performance of CCTA for detecting significant lesions was compared with that of CAG by 3 independent cardiologists. Results: All CCTA examinations were performed without complication. Comparison between CCTA and CAG was made in the 53 patients who underwent both studies. Sensitivity, specificity, and positive and negative predictive values for these 53 patients were 81%, 99%, 87%, and 99%, respectively. Conclusion: The 64-slice MSCT, developed in recent years, allows reliable noninvasive evaluation of coronary artery morphology, including plaque, stenosis, and congenital anomaly. The diagnostic accuracy of MSCT scans for detecting lesions makes it a good imaging substitute for CAG in the evaluation of these coronary segments. [J Chin Med Assoc 2008;71(5):241–246]
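
    For reference, the four reported metrics are the standard ones computed from the per-patient confusion matrix with CAG as the gold standard, where $TP$, $FP$, $TN$, $FN$ denote true/false positives/negatives:

```latex
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}, \qquad
\text{PPV} = \frac{TP}{TP + FP}, \qquad
\text{NPV} = \frac{TN}{TN + FN}
```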

    GIFD: A Generative Gradient Inversion Method with Feature Domain Optimization

    Federated learning (FL) has recently emerged as a promising distributed machine learning framework that preserves clients' privacy by allowing multiple clients to upload gradients calculated from their local data to a central server. Recent studies find that the exchanged gradients also carry a risk of privacy leakage: e.g., an attacker can invert the shared gradients and recover sensitive data from an FL system by leveraging pre-trained generative adversarial networks (GANs) as prior knowledge. However, performing gradient inversion attacks only in the latent space of the GAN model limits their expressiveness and generalizability. To tackle these challenges, we propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of its intermediate layers. Instead of optimizing only over the initial latent code, we progressively change the optimized layer, moving from the initial latent space to intermediate layers closer to the output images. In addition, we design a regularizer to avoid unrealistic image generation by adding a small $l_1$-ball constraint to the search range. We also extend GIFD to the out-of-distribution (OOD) setting, which weakens the assumption that the training sets of the GANs and the FL tasks obey the same data distribution. Extensive experiments demonstrate that our method achieves pixel-level reconstruction and is superior to existing methods. Notably, GIFD also shows strong generalizability under different defense strategy settings and batch sizes. Comment: ICCV 2023.
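
    To make the two-stage idea concrete, below is a minimal, self-contained PyTorch sketch of gradient inversion that first optimizes a latent code and then an intermediate feature offset kept inside a small $l_1$ ball. The toy generator, classifier, dimensions, and loop settings are illustrative assumptions, not GIFD's actual architecture or algorithm, and the rescaling step is only a crude stand-in for a true $l_1$-ball projection.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins (illustrative only): a two-stage "generator" and a
# linear classifier playing the role of the FL model.
stage1 = nn.Sequential(nn.Linear(8, 16), nn.Tanh())   # latent -> feature
stage2 = nn.Sequential(nn.Linear(16, 10), nn.Tanh())  # feature -> "image"
clf = nn.Linear(10, 2)

def model_grads(x, y):
    # Gradients of the FL model's loss w.r.t. its parameters, kept
    # differentiable so we can optimize through them.
    loss = F.cross_entropy(clf(x), y)
    return torch.autograd.grad(loss, clf.parameters(), create_graph=True)

# The gradient a client would share for its private sample.
x_priv, y_priv = torch.randn(1, 10), torch.tensor([1])
target = [g.detach() for g in model_grads(x_priv, y_priv)]

def grad_match_loss(x):
    # Cosine-distance gradient matching, a common inversion objective.
    return sum(1 - F.cosine_similarity(g.flatten(), t.flatten(), dim=0)
               for g, t in zip(model_grads(x, y_priv), target))

# Stage A: optimize only the initial latent code.
z = torch.randn(1, 8, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    grad_match_loss(stage2(stage1(z))).backward()
    opt.step()

# Stage B: move to the intermediate feature domain, searching an offset
# kept inside a small l1 ball (the regularizer mentioned above).
feat = stage1(z).detach()
delta = torch.zeros_like(feat, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)
radius = 1.0
for _ in range(200):
    opt.zero_grad()
    grad_match_loss(stage2(feat + delta)).backward()
    opt.step()
    with torch.no_grad():  # crude l1-ball constraint via rescaling
        n1 = delta.abs().sum()
        if n1 > radius:
            delta.mul_(radius / n1)

print("MSE to private sample:", F.mse_loss(stage2(feat + delta), x_priv).item())
```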