5,556 research outputs found

    Mechanical and tribo-metallurgical behavior of 17-4 precipitation hardening stainless steel affected by severe cold plastic deformation: a comprehensive review article

    Get PDF
    This article comprehensively reviews the mechanical properties and tribo-metallurgical behavior of 17-4 precipitation hardening stainless steel (17-4PH SS) during and after cold plastic deformation. According to the scientific literature, stainless steels are among the few ferrous alloys that can be effectively processed by cold working into sheets and other shapes; a handful of other alloy families, such as mild low-carbon steels, copper and its alloys, and aluminum alloys, share this capability. In engineering applications, several types of mechanical failure must be taken into account when investigating the mechanical behavior and tribo-metallurgical properties of a material. For example, corrosion resistance, wear resistance, and fatigue failure are investigated through microstructural studies covering grain size, grain boundaries, orientations, dislocations, and so on. Based on the published results for 17-4PH SS, one of the most influential factors in mechanical and tribo-metallurgical performance is grain size. Achieving a favorable balance between strength and ductility has also been reported as a dilemma in materials science, one that limits the potential of many structural materials. Following failure analysis, methods such as ultrasonic processes are applied to treat 17-4PH SS and diminish the damage caused by fretting fatigue by changing the microstructure, residual stress, and other parameters. Other cold deformation technologies, notably the ultrasonic surface rolling process, have likewise produced nanostructured surface layers with greatly upgraded mechanical properties in treated 17-4PH SS. To this end, such cold working processes on 17-4PH SS and their results are elaborated in this review paper.

    Imbalance Learning and Its Application on Medical Datasets

    Get PDF
    To gain more valuable information from the increasingly large amounts of data, data mining has been a hot topic attracting growing attention over the past two decades. One of the challenges in data mining is imbalance learning, which refers to learning from imbalanced datasets. An imbalanced dataset is dominated by some classes (majority) while other classes are under-represented (minority). Imbalanced datasets degrade the learning ability of traditional methods, which are designed under the assumption that all classes are balanced and have equal misclassification costs, leading to poor performance on the minority classes. This phenomenon is usually called the class imbalance problem. However, it is usually the minority classes that are of greater interest and importance, such as sick cases in a medical dataset. Additionally, traditional methods are optimized to achieve maximum accuracy, which is not a suitable metric for evaluating performance on imbalanced datasets. From the view of the data space, class imbalance can be classified as extrinsic or intrinsic. Extrinsic imbalance is caused by external factors, such as data transmission or data storage, while intrinsic imbalance means the dataset is inherently imbalanced due to its nature. As extrinsic imbalance can be fixed by collecting more samples, this thesis mainly focuses on two scenarios of intrinsic imbalance: machine learning for imbalanced structured datasets and deep learning for imbalanced image datasets. Solutions for the class imbalance problem are usually called imbalance learning methods and can be grouped into data-level methods (re-sampling), algorithm-level methods (re-weighting), and hybrid methods (a minimal sketch of the two basic remedies follows this abstract). Data-level methods modify the class distribution of the training dataset to create balanced training sets; typical examples are over-sampling and under-sampling. Instead of modifying the data distribution, algorithm-level methods adjust the misclassification costs to alleviate the class imbalance problem; one typical example is cost-sensitive methods. Hybrid methods usually combine data-level and algorithm-level methods. However, existing imbalance learning methods encounter different kinds of problems. Over-sampling methods increase the number of minority samples to create balanced training sets, which may cause the trained model to overfit the minority class. Under-sampling methods create balanced training sets by discarding majority samples, which leads to information loss and poor performance of the trained model. Cost-sensitive methods usually need assistance from domain experts to define misclassification costs, which are task-specific; thus, their generalization ability is poor. In particular, for deep learning under class imbalance, re-sampling methods may introduce a large computational cost, and existing re-weighting methods can lead to poor performance. The objective of this dissertation is to understand feature differences under class imbalance and to improve classification performance on structured and image datasets. This thesis proposes two machine learning methods for imbalanced structured datasets and one deep learning method for imbalanced image datasets. The proposed methods are evaluated on several medical datasets, which are intrinsically imbalanced. Firstly, we study the feature difference between the majority class and the minority class of an imbalanced medical dataset collected from a Chinese hospital.
    After data cleaning and structuring, we obtain 3292 kidney stone cases treated by percutaneous nephrolithotomy from 2012 to 2019. There are 651 cases (19.78%) with postoperative complications, which makes complication prediction an imbalanced classification task. We propose a sampling-based method, SMOTE-XGBoost, and implement it to build a postoperative complication prediction model. Experimental results show that the proposed method outperforms classic machine learning methods. Furthermore, traditional prediction models for percutaneous nephrolithotomy are designed to predict kidney stone status and overlook complication-related features, which can degrade their performance on complication prediction tasks. To this end, we merge more features into the proposed sampling-based method and further improve the classification performance. Overall, SMOTE-XGBoost achieves an AUC of 0.7077, which is 41.54% higher than that of S.T.O.N.E. nephrolithometry, a traditional prediction model for percutaneous nephrolithotomy. After reviewing the existing machine learning methods under class imbalance, we propose a novel ensemble learning approach called Multiple bAlance Subset Stacking (MASS), sketched after this abstract. MASS first cuts the majority class into multiple subsets the size of the minority set and combines each majority subset with the minority set to form one balanced subset. In this way, MASS overcomes the problem of information loss because it does not discard any majority sample. Each balanced subset is used to train one base classifier. Then, the original dataset is fed to all the trained base classifiers, whose outputs are used to generate the stacking dataset. A stack model is trained on the stacking dataset to obtain the optimal weights for the base classifiers. Because the stacking dataset keeps the same labels as the original dataset, the overfitting problem is avoided. Finally, we obtain an ensembled strong model based on the trained base classifiers and the stacking model. Extensive experimental results on three medical datasets show that MASS outperforms baseline methods. The robustness of MASS is demonstrated by implementing different base classifiers. We also design a parallel version of MASS to reduce the training time cost. The speedup analysis shows that Parallel MASS can greatly reduce training time on large datasets; in our experiments it reduces training time by up to 101.8% compared with MASS. When it comes to the class imbalance problem in image datasets, existing imbalance learning methods suffer from large training costs and poor performance. After introducing the problems of implementing re-sampling methods on image classification tasks, we demonstrate the issues of re-weighting by class frequencies through experimental results on one medical image dataset. We then propose a novel re-weighting method, the Hardness Aware Dynamic (HAD) loss, to solve the class imbalance problem in image datasets (a sketch of the re-weighting loop also follows this abstract). After each training epoch of the deep neural network, we compute the classification hardness of each class, and in the next epoch we assign higher class weights to the classes with large classification hardness values and vice versa. In this way, HAD tunes the weight of each sample in the loss function dynamically during the training process. The experimental results prove that HAD significantly outperforms the state-of-the-art methods.
    Moreover, HAD greatly improves the classification accuracy of minority classes while making only a small compromise on majority-class accuracy. In particular, the HAD loss improves average precision by 10.04% compared with the best baseline, the focal loss, on the HAM10000 dataset. At last, I conclude this dissertation with our contributions to imbalance learning and provide an overview of potential directions for future research, including extensions of the three proposed methods, development of task-specific algorithms, and addressing the challenges of within-class imbalance.
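    The following is a minimal Python sketch (not from the thesis) of the two basic remedies named above: random over-sampling as a data-level method and inverse-frequency class weights as an algorithm-level method. The function names and the weighting formula are illustrative assumptions.

        import numpy as np

        def random_oversample(X, y, seed=0):
            """Data-level remedy: duplicate samples (with replacement) until
            every class matches the majority-class size."""
            rng = np.random.default_rng(seed)
            classes, counts = np.unique(y, return_counts=True)
            n_max = counts.max()
            idx = []
            for c in classes:
                c_idx = np.flatnonzero(y == c)
                idx.append(rng.choice(c_idx, size=n_max, replace=True))
            idx = np.concatenate(idx)
            return X[idx], y[idx]

        def inverse_frequency_weights(y):
            """Algorithm-level remedy: weight each class inversely to its
            frequency (the scikit-learn 'balanced' heuristic)."""
            classes, counts = np.unique(y, return_counts=True)
            return dict(zip(classes, len(y) / (len(classes) * counts)))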
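    Below is a hedged sketch of the MASS procedure as the abstract describes it, assuming binary labels (minority = 1); the choice of base classifier and stack model here is a placeholder, not the thesis's configuration.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.tree import DecisionTreeClassifier

        def train_mass(X, y, seed=0):
            rng = np.random.default_rng(seed)
            min_idx = np.flatnonzero(y == 1)
            maj_idx = rng.permutation(np.flatnonzero(y == 0))
            # Cut the majority class into subsets of roughly minority size,
            # so no majority sample is discarded.
            n_parts = max(1, len(maj_idx) // len(min_idx))
            base_models = []
            for part in np.array_split(maj_idx, n_parts):
                idx = np.concatenate([part, min_idx])   # one balanced subset
                base_models.append(
                    DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]))
            # Stacking dataset: base-classifier outputs on the *original*
            # data, keeping the original labels.
            Z = np.column_stack([m.predict_proba(X)[:, 1] for m in base_models])
            stacker = LogisticRegression().fit(Z, y)
            return base_models, stacker

        def predict_mass(base_models, stacker, X):
            Z = np.column_stack([m.predict_proba(X)[:, 1] for m in base_models])
            return stacker.predict(Z)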
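    And a hedged PyTorch sketch of the HAD-style re-weighting loop: measuring per-class hardness as the error rate is an assumption here; the thesis's exact hardness definition may differ.

        import torch
        import torch.nn.functional as F

        def class_hardness(model, loader, num_classes, device="cpu"):
            """Per-class error rate over one pass of the training data
            (an assumed stand-in for the thesis's hardness measure)."""
            errs = torch.zeros(num_classes)
            counts = torch.zeros(num_classes)
            model.eval()
            with torch.no_grad():
                for x, y in loader:
                    pred = model(x.to(device)).argmax(dim=1).cpu()
                    for c in range(num_classes):
                        mask = y == c
                        counts[c] += mask.sum()
                        errs[c] += (pred[mask] != c).sum()
            return errs / counts.clamp(min=1)

        # Before each epoch: harder classes get proportionally larger weights.
        #   hardness = class_hardness(model, train_loader, num_classes)
        #   weights = num_classes * hardness / hardness.sum()
        #   loss = F.cross_entropy(logits, targets, weight=weights.to(device))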

    Adaptive Robust Traffic Engineering in Software Defined Networks

    Full text link
    One of the key advantages of Software-Defined Networks (SDN) is the opportunity to integrate traffic engineering modules able to optimize the network configuration according to traffic. Ideally, the network should be dynamically reconfigured as traffic evolves, so as to achieve remarkable gains in the efficient use of resources with respect to traditional static approaches. Unfortunately, reconfigurations cannot be too frequent, for a number of reasons related to route stability, forwarding rule instantiation, individual flow dynamics, traffic monitoring overhead, etc. In this paper, we focus on the fundamental problem of deciding whether, when, and how to reconfigure the network as traffic evolves. We propose a new approach that clusters relevant points in the multi-dimensional traffic space, taking into account similarities in optimal routing and not only in traffic values. Moreover, to give more flexibility to the online decision of when to apply a reconfiguration, we allow some overlap between clusters, which can guarantee good-quality routing regardless of the transition instant. We compare our algorithm with state-of-the-art approaches in realistic network scenarios. Results show that our method significantly reduces the number of reconfigurations, with a negligible deviation in network performance with respect to continuously updating the network configuration.
    Comment: 10 pages, 8 figures, submitted to IFIP Networking 201
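    A rough illustrative sketch of the decision loop follows. This is an assumption-laden simplification: plain k-means over traffic vectors stands in for the paper's routing-aware clustering, and an inflated cluster radius emulates the overlap between clusters.

        import numpy as np
        from sklearn.cluster import KMeans

        def fit_clusters(traffic_matrices, k, overlap=1.2):
            """traffic_matrices: (n_samples, n_pairs) demand vectors."""
            km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(traffic_matrices)
            # Radius of each cluster, inflated so neighboring regions overlap.
            radii = np.zeros(k)
            for c in range(k):
                pts = traffic_matrices[km.labels_ == c]
                radii[c] = overlap * np.linalg.norm(
                    pts - km.cluster_centers_[c], axis=1).max()
            return km, radii

        def maybe_reconfigure(km, radii, active, traffic_now):
            """Keep the current routing while traffic stays inside the
            inflated region of the active cluster; otherwise switch."""
            d = np.linalg.norm(traffic_now - km.cluster_centers_[active])
            if d <= radii[active]:
                return active, False
            return int(km.predict(traffic_now.reshape(1, -1))[0]), True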

    SANNS: Scaling Up Secure Approximate k-Nearest Neighbors Search

    Get PDF
    The k-Nearest Neighbor Search (k-NNS) is the backbone of several cloud-based services such as recommender systems, face recognition, and database search on text and images. In these services, the client sends the query to the cloud server and receives the response, in which case the query and response are revealed to the service provider. Such data disclosures are unacceptable in several scenarios due to the sensitivity of the data and/or privacy laws. In this paper, we introduce SANNS, a system for secure k-NNS that keeps the client's query and the search result confidential. SANNS comprises two protocols: an optimized linear scan and a protocol based on a novel sublinear-time clustering-based algorithm. We prove the security of both protocols in the standard semi-honest model. The protocols are built upon several state-of-the-art cryptographic primitives such as lattice-based additively homomorphic encryption, distributed oblivious RAM, and garbled circuits. We provide several contributions to each of these primitives which are applicable to other secure computation tasks. Both of our protocols rely on a new circuit for approximate top-k selection from n numbers that is built from O(n + k^2) comparators. We have implemented our proposed system and performed extensive experiments on four datasets in two different computation environments, demonstrating an 18-31x faster response time compared to optimally implemented protocols from the prior work. Moreover, SANNS is the first work that scales to a database of 10 million entries, pushing the limit by more than two orders of magnitude.
    Comment: 18 pages, to appear at USENIX Security Symposium 202
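    A plaintext analogue of the approximate top-k selection (a sketch only; the actual protocol evaluates this as a garbled circuit over protected distances): randomly partition the n inputs into k bins, take each bin's minimum with about n - k comparisons in total, then order the k candidates, which a comparator circuit can do with at most k^2 comparisons.

        import random

        def approx_top_k(values, k, seed=0):
            """Approximate k smallest entries (k-NNS keeps smallest distances).
            Assumes len(values) >= k."""
            rng = random.Random(seed)
            idx = list(range(len(values)))
            rng.shuffle(idx)                      # random partition into k bins
            bins = [idx[i::k] for i in range(k)]
            # One minimum per bin: n - k comparisons in total.
            candidates = [min(b, key=lambda i: values[i]) for b in bins]
            # Order the k candidates (a circuit would use <= k^2 comparators).
            candidates.sort(key=lambda i: values[i])
            return candidates                     # indices of approximate top-k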

    Open Problems in (Hyper)Graph Decomposition

    Full text link
    Large networks are useful in a wide range of applications. Sometimes problem instances are composed of billions of entities. Decomposing and analyzing these structures helps us gain new insights about our surroundings. Even if the final application concerns a different problem (such as traversal, finding paths, trees, and flows), decomposing large graphs is often an important subproblem for complexity reduction or parallelization. This report is a summary of discussions that happened at Dagstuhl seminar 23331 on "Recent Trends in Graph Decomposition" and presents currently open problems and future directions in the area of (hyper)graph decomposition.

    Flexible resource management in next-generation networks with SDN (Gestion flexible des ressources dans les réseaux de nouvelle génération avec SDN)

    Get PDF
    Abstract: 5G and beyond-5G/6G are expected to shape the future economic growth of multiple vertical industries by providing the network infrastructure required to enable innovation and new business models. They have the potential to offer a wide spectrum of services, namely higher data rates, ultra-low latency, and high reliability. To achieve their promises, 5G and beyond-5G/6G rely on software-defined networking (SDN), edge computing, and radio access network (RAN) slicing technologies. In this thesis, we aim to use SDN as a key enabler to enhance resource management in next-generation networks. SDN allows programmable management of edge computing resources and dynamic orchestration of RAN slicing. However, achieving efficient performance based on SDN capabilities is a challenging task due to the permanent fluctuations of traffic in next-generation networks and the diversified quality-of-service requirements of emerging applications. Toward our objective, we address the load balancing problem in distributed SDN architectures, and we optimize the RAN slicing of communication and computation resources at the edge of the network. In the first part of this thesis, we present a proactive approach to balance the load in a distributed SDN control plane using the data plane component migration mechanism. First, we propose prediction models that forecast the load of SDN controllers in the long term. By using these models, we can preemptively detect whether the load will be unbalanced in the control plane and, thus, schedule migration operations in advance. Second, we improve the migration operation performance by optimizing the tradeoff between a load balancing factor and the cost of migration operations. This proactive load balancing approach not only prevents SDN controllers from being overloaded, but also allows a judicious selection of which data plane component should be migrated and where the migration should happen. In the second part of this thesis, we propose two RAN slicing schemes that efficiently allocate the communication and computation resources at the edge of the network. The first RAN slicing scheme performs the allocation of radio resource blocks (RBs) to end-users on two time-scales: a large time-scale and a small time-scale. On the large time-scale, an SDN controller allocates to each base station a number of RBs from a shared pool of radio RBs, according to its requirements in terms of delay and data rate. On the small time-scale, each base station assigns its available resources to its end-users and requests, if needed, additional resources from adjacent base stations. The second RAN slicing scheme jointly allocates the RBs and the computation resources available in edge computing servers, based on an open RAN architecture. For the proposed RAN slicing schemes, we develop reinforcement learning and deep reinforcement learning algorithms to dynamically allocate RAN resources (a minimal sketch of the two time-scale allocation follows this abstract).
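    A minimal Python sketch of the two time-scale idea described above (the proportional rule, function names, and borrowing logic are illustrative assumptions; the thesis develops reinforcement learning algorithms for these decisions):

        def large_timescale_allocation(pool_size, demands):
            """SDN controller: split the shared RB pool across base stations
            in proportion to their declared delay/data-rate requirements."""
            total = sum(demands.values())
            return {bs: int(pool_size * d / total) for bs, d in demands.items()}

        def small_timescale_assignment(own_rbs, user_needs, neighbors_spare):
            """Base station: serve its own users first, then borrow spare RBs
            from adjacent base stations when demand exceeds its allocation."""
            needed = sum(user_needs)
            if needed <= own_rbs:
                return own_rbs - needed, 0            # (spare RBs, borrowed)
            borrowed = min(needed - own_rbs, sum(neighbors_spare))
            return 0, borrowed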