4 research outputs found

    Implementation of PCM (Process Compact Models) for the study and improvement of variability in advanced FDSOI CMOS technologies

    Recently, the race for miniaturization has slowed because of the technological challenges it entails. These barriers include the growing impact of local and process variability, which stems from the increasing complexity of the manufacturing process and from miniaturization itself, in addition to the difficulty of reducing the channel length. To address these challenges, new architectures, very different from the traditional bulk one, have been proposed. However, these new architectures require more effort to industrialize, and the increased complexity and development time call for larger financial investments. There is therefore a real need to improve device development and optimization, and this work offers some guidelines toward that goal. The idea is to reduce the number of trials required to find the optimal manufacturing process, i.e. the one that yields a device whose performance and dispersion meet predefined targets. The approach developed in this thesis combines TCAD tools and compact models to build and calibrate what is called a PCM (Process Compact Model). A PCM is an analytical model that links the process parameters of the MOSFET to its electrical parameters. It draws on the benefits of both TCAD (since it connects process parameters directly to electrical parameters) and compact models (since the model is analytical and therefore fast to evaluate). A sufficiently predictive and robust PCM can be used to optimize the performance and overall variability of the transistor through an appropriate optimization algorithm. This approach differs from traditional development methods, which rely heavily on scientific expertise and successive trials to improve the device; instead, it provides a deterministic and robust mathematical framework for the problem. The concept was developed, tested, and applied to 28 nm and 14 nm FD-SOI transistors as well as to TCAD simulations. The results are presented, along with recommendations for implementing the technique at industrial scale. Some perspectives and applications are likewise suggested.
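    To make the PCM idea concrete, here is a minimal illustrative sketch in Python, not the thesis's actual model: a quadratic response surface standing in for a PCM is fitted to synthetic data (in place of real TCAD runs), linking two hypothetical process parameters (gate length and channel doping) to a threshold voltage, and is then used to search for a process point that hits a Vth target. All parameter names, values, and the model form are assumptions made for illustration.

    ```python
    # Minimal PCM-style sketch (illustrative only): fit an analytical
    # response surface linking process parameters to an electrical
    # parameter, then optimize it. All data here are synthetic stand-ins
    # for real TCAD calibration runs.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    # Hypothetical process parameters: gate length (nm), channel doping (x1e18 cm^-3).
    L = rng.uniform(14, 28, 200)
    N = rng.uniform(0.5, 2.0, 200)

    # Stand-in for TCAD-simulated threshold voltage (V); a real PCM would be
    # calibrated against actual TCAD runs and silicon measurements.
    vth = (0.30 + 0.004 * (L - 20) - 0.05 * (N - 1.0)
           + 0.001 * (L - 20) * (N - 1.0)
           + rng.normal(0.0, 0.002, L.size))  # local variability noise

    # Quadratic polynomial basis: the analytical form assumed for the PCM.
    def basis(l, n):
        return np.column_stack([np.ones_like(l), l, n, l * n, l**2, n**2])

    coef, *_ = np.linalg.lstsq(basis(L, N), vth, rcond=None)

    def pcm_vth(params):
        """Analytical PCM: predicted Vth for a (gate length, doping) pair."""
        l, n = np.atleast_1d(params[0]), np.atleast_1d(params[1])
        return (basis(l, n) @ coef)[0]

    # Process optimization: find the point whose predicted Vth hits a target.
    target_vth = 0.32
    res = minimize(lambda p: (pcm_vth(p) - target_vth) ** 2,
                   x0=[20.0, 1.0], bounds=[(14, 28), (0.5, 2.0)])
    print("optimal (L, N):", res.x, "-> predicted Vth:", pcm_vth(res.x))
    ```

    Because the fitted model is analytical, each evaluation inside the optimizer is essentially free, which is the speed advantage over re-running TCAD that the abstract describes.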

    Classification approaches for microarray gene expression data analysis

    Microarray technology is among the vital technological advances in bioinformatics. Microarray data are typically noisy and high-dimensional, so carefully prepared data are a prerequisite for microarray data analysis. Classification of biological samples is the most common analysis performed on microarray data. This study focuses on determining the confidence level used for classifying a sample of an unknown gene based on microarray data. A support vector machine (SVM) classifier was applied and the results compared with other classifiers, namely k-nearest neighbors (KNN) and a neural network (NN). Four microarray datasets were used: leukemia, prostate, colon, and breast. The study also analyzed two SVM kernels, radial and linear. The analysis varied the train/test split percentages of each dataset to ensure that the best-performing partitions produced the best results. Ten-fold cross-validation and L1/L2 regularization techniques were used to address over-fitting and to perform feature selection in classification. ROC curves and confusion matrices were used for performance assessment. The KNN and NN classifiers were trained on the same datasets and the results compared. The results showed that SVM exceeded the other classifiers in performance and accuracy; for each dataset, the linear-kernel SVM was the best-performing method. The highest accuracy on the colon data was 83% with the SVM classifier, versus 77% with NN and 72% with KNN. On the leukemia data, the highest accuracy was 97% with SVM, 85% with NN, and 91% with KNN. On the breast data, the highest accuracy was 73% with SVM-L2, versus 56% with NN and 47% with KNN. Finally, the highest accuracy on the prostate data was 80% with SVM-L1, versus 75% with NN and 66% with KNN. SVM also showed the highest accuracy and area under the curve compared with KNN and NN across the different tests.
    Master of Science (MSc) in Computational Science
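    As a rough, runnable illustration of the protocol described above (not the study's actual code or data), the following Python sketch uses scikit-learn with synthetic high-dimensional data in place of the leukemia, prostate, colon, and breast datasets. It compares linear- and radial-kernel SVMs, an L1-penalized linear SVM, KNN, and a neural network under 10-fold cross-validation, then prints a confusion matrix for a held-out split. Hyperparameters are arbitrary placeholders.

    ```python
    # Illustrative comparison on synthetic "microarray-like" data:
    # few samples, many features.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC, LinearSVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import confusion_matrix

    X, y = make_classification(n_samples=100, n_features=2000,
                               n_informative=50, random_state=0)

    models = {
        "SVM (linear)": make_pipeline(StandardScaler(), SVC(kernel="linear")),
        "SVM (radial)": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        # L1 penalty gives embedded feature selection, as in the SVM-L1 variant.
        "SVM-L1": make_pipeline(StandardScaler(),
                                LinearSVC(penalty="l1", dual=False,
                                          C=0.1, max_iter=5000)),
        "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
        "NN": make_pipeline(StandardScaler(),
                            MLPClassifier(max_iter=1000, random_state=0)),
    }

    # 10-fold cross-validated accuracy for each classifier.
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=10)
        print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")

    # Confusion matrix on a held-out split for the linear SVM.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = models["SVM (linear)"].fit(X_tr, y_tr)
    print(confusion_matrix(y_te, clf.predict(X_te)))
    ```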

    Envelope-based support vector machines classification

    Envelope methodology is a promising dimension-reduction approach originally introduced in the regression framework. In this work, we extend envelope applications, focusing on the reduce-and-classify approach in supervised learning. The first contribution extends the method to classification and develops a new projection-based approach built on a support vector machine (SVM) classifier. Our proposed classifier, ESVM (Envelope-based Support Vector Machines), combines the envelope method and SVM to achieve better and more efficient classification. Using the envelope idea to extract a lower-dimensional subspace onto which the data are projected improves classification performance, and the empirical results show a low misclassification rate for ESVM. Furthermore, we extend the ESVM classifier to sparse data, where the reducing subspace reduces the dimension and selects significant variables simultaneously. We employ an adaptive group lasso penalty to impose sparsity in the reducing subspace. The classifier is evaluated on simulated and real data.
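    The reduce-and-classify pipeline can be sketched as follows. The envelope subspace estimator itself is the paper's contribution and is not reproduced here; PCA is substituted as a generic stand-in projection purely so the two-step structure (reduce the dimension, then classify with an SVM) is runnable. Data and dimensions are synthetic assumptions.

    ```python
    # Reduce-then-classify sketch in the spirit of ESVM. PCA stands in for
    # the envelope subspace estimator, which is not reproduced here.
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=100,
                               n_informative=10, random_state=1)

    # Step 1: estimate a low-dimensional reducing subspace (stand-in: PCA).
    # Step 2: classify the projected data with an SVM.
    reduce_then_svm = make_pipeline(StandardScaler(),
                                    PCA(n_components=10),
                                    SVC(kernel="linear"))
    plain_svm = make_pipeline(StandardScaler(), SVC(kernel="linear"))

    for name, model in [("reduce-then-SVM", reduce_then_svm),
                        ("plain SVM", plain_svm)]:
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name}: {acc:.3f}")
    ```

    In the actual ESVM, the projection would be the estimated envelope subspace (with an adaptive group lasso penalty in the sparse variant), not principal components.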