
    Classification of Arabic Autograph as Genuine and Forged through a Combination of New Attribute Extraction Techniques

    This study proposes a new framework for an Arabic autograph verification technique. It extracts certain dynamic attributes to distinguish between forged and genuine signatures. To this end, the framework uses Adaptive Window Positioning to extract the uniqueness of signers in handwritten signatures along with their specific characteristics. Based on this framework, Arabic autographs are first divided into 14×14 windows; each fragment is wide enough to include sufficient information about a signer's style yet small enough to allow fast processing. Two types of fused attributes, based on the Discrete Cosine Transform and the Discrete Wavelet Transform of the region of interest, are then proposed for attribute extraction. Finally, a Decision Tree is chosen to classify the autographs using these attributes as its input. Evaluations carried out on Arabic autographs are very encouraging, with a verification rate of 99.75% for sequential selection of forged and genuine Arabic autographs, significantly outperforming the most recent work in this field.
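The pipeline this abstract describes (window partitioning, DCT/DWT features, decision-tree classification) can be sketched roughly as follows. This is a minimal illustration on synthetic images: the 2×2 low-frequency DCT energy and the single-level Haar-style statistic per window are assumed stand-ins, not the paper's actual fused attributes.

```python
import numpy as np
from scipy.fft import dct
from sklearn.tree import DecisionTreeClassifier

def window_features(img, grid=14):
    """Split an image into grid x grid windows and compute one DCT-based
    and one Haar-wavelet-style feature per window (illustrative only)."""
    h, w = img.shape
    wh, ww = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            win = img[i * wh:(i + 1) * wh, j * ww:(j + 1) * ww]
            # 2-D DCT; keep the energy of the low-frequency 2x2 corner
            c = dct(dct(win, axis=0, norm='ortho'), axis=1, norm='ortho')
            feats.append(np.abs(c[:2, :2]).sum())
            # One level of Haar-style averaging/differencing
            avg = (win[0::2, 0::2] + win[1::2, 0::2]
                   + win[0::2, 1::2] + win[1::2, 1::2]) / 4.0
            feats.append(np.abs(win[0::2, 0::2] - avg).mean())
    return np.array(feats)

# Synthetic stand-ins for genuine (label 1) and forged (label 0) signatures
rng = np.random.default_rng(0)
genuine = [rng.random((56, 56)) for _ in range(5)]
forged = [rng.random((56, 56)) + 0.5 for _ in range(5)]
X = np.array([window_features(s) for s in genuine + forged])
y = [1] * 5 + [0] * 5

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
```

Real signature images would need binarization and size normalization before windowing; only the 14×14 grid is taken from the abstract, everything else here is assumed.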

    New Attributes Extraction System for Arabic Autograph as Genuine and Forged through Classification Techniques

    Writer authentication via handwritten autograph is widely practiced throughout the world, and a thorough check of the autograph is important before reaching a conclusion about the signer. The Arabic autograph has unique characteristics: it includes lines and overlapping strokes, which make it difficult to achieve high accuracy. This project addresses this difficulty by selecting the best characteristics for Arabic autograph authentication, characterized by the number of attributes representing each autograph. The objective is to determine whether a given autograph is genuine or a forgery. The proposed method is based on the Discrete Cosine Transform (DCT) for feature extraction, followed by Sparse Principal Component Analysis (SPCA) to select significant attributes for Arabic handwritten autograph recognition and to aid the authentication step. Finally, a decision tree classifier performs the signature authentication. The suggested DCT-with-SPCA method achieves good results on the Arabic autograph dataset when verified against various techniques.
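A rough sketch of the three stages named above (DCT feature extraction, SPCA attribute selection, decision-tree classification), on synthetic data. The 16×16 image size, the 4×4 low-frequency block, the standardization step, and the SPCA settings are all illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct
from sklearn.decomposition import SparsePCA
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# Synthetic stand-ins: 20 flattened 16x16 "autograph" images with toy labels
images = rng.random((20, 16, 16))
labels = np.array([0, 1] * 10)  # 0 = forged, 1 = genuine (toy labels)

# Step 1: DCT feature extraction (keep the low-frequency 4x4 block)
def dct_features(img):
    c = dct(dct(img, axis=0, norm='ortho'), axis=1, norm='ortho')
    return c[:4, :4].ravel()

X = np.array([dct_features(im) for im in images])
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)  # standardize features

# Step 2: Sparse PCA to pick out a few significant attribute combinations
spca = SparsePCA(n_components=5, alpha=0.1, random_state=1)
Z = spca.fit_transform(X)

# Step 3: decision tree on the reduced attributes
clf = DecisionTreeClassifier(random_state=1).fit(Z, labels)
pred = clf.predict(Z)
```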

    Cycle Time Estimation in a Semiconductor Wafer Fab: A concatenated Machine Learning Approach

    The ongoing digitalization of all areas of life and industry is increasing the demand for microchips. More and more sectors, including the automotive industry, are finding that their supply chains now depend on semiconductor manufacturers, which recently led to the semiconductor crisis. This situation raises the need for accurate predictions of semiconductor delivery times. Because semiconductor production is extremely complex, such estimates are not easy to produce. Common approaches are either too simplistic (e.g., mean or rolling-mean estimators) or require too much time for detailed scenario analyses (e.g., discrete-event simulations). This thesis therefore proposes a new methodology intended to be more accurate than mean or rolling-mean estimators but faster than simulations. The methodology uses a concatenation of machine learning models capable of predicting waiting times in a semiconductor fab based on a set of features. The thesis develops and analyzes this methodology. It includes a detailed analysis of the features required by each model, an analysis of the exact production process each product must pass through (referred to as its "route"), and strategies for handling uncertainty when feature values are not known in the future. In addition, the proposed methodology is evaluated with real operational data from a wafer fab of Robert Bosch GmbH. The methodology is shown to outperform the mean and rolling-mean estimators, particularly in situations where a lot's cycle time deviates significantly from the mean. It is also shown that the method's execution time is significantly shorter than that of a detailed simulation.
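A minimal sketch of the "concatenated models" idea on synthetic data: one regressor per route step predicts that step's waiting time, and the per-step predictions are summed into a cycle-time estimate. The three-step route, the feature set, and the choice of gradient-boosted trees are assumptions for illustration, not the thesis's actual setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
# Toy "route" of 3 process steps; each lot carries per-step features
# (e.g. queue length, tool utilization) and an observed waiting time.
n_lots, n_steps, n_feat = 300, 3, 4
X = rng.random((n_lots, n_steps, n_feat))
wait = X.sum(axis=2) + rng.normal(0, 0.1, (n_lots, n_steps))

# One model per route step, applied in sequence (the "concatenation")
models = [GradientBoostingRegressor(random_state=3).fit(X[:, s], wait[:, s])
          for s in range(n_steps)]

def predict_cycle_time(lot_features):
    # Sum the predicted waiting times along the lot's route
    return sum(m.predict(f.reshape(1, -1))[0]
               for m, f in zip(models, lot_features))

estimate = predict_cycle_time(X[0])
```

In the thesis's setting, later steps lie in the future, so their feature values are themselves uncertain; the strategies for that are the part this toy sketch omits.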

    Classification and Compression of Multi-Resolution Vectors: A Tree Structured Vector Quantizer Approach

    Tree structured classifiers and quantizers have been used with good success for problems ranging from successive refinement coding of speech and images to classification of texture, faces and radar returns. Although these methods have worked well in practice, there are few results on the theoretical side. We present several existing algorithms for tree structured clustering using multi-resolution data and develop some results on their convergence and asymptotic performance. We show that greedy growing algorithms will result in asymptotic distortion going to zero for the case of quantizers and prove termination in finite time for constraints on the rate. We derive an online algorithm for the minimization of distortion. We also show that a multiscale LVQ algorithm for the design of a tree structured classifier converges to an equilibrium point of a related ordinary differential equation. Simulation results and descriptions of several applications are used to illustrate the advantages of this approach.
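The greedy growing idea, repeatedly splitting the leaf that contributes the most distortion, can be sketched as below. This is a toy 2-D version using 2-means splits; the paper's multi-resolution data, rate constraints, and convergence analysis are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def greedy_tsvq(data, max_leaves=4):
    """Grow a tree-structured vector quantizer greedily: repeatedly split
    the leaf with the largest total squared distortion via 2-means."""
    leaves = [data]
    while len(leaves) < max_leaves:
        # Distortion of each leaf = sum of squared distances to its centroid
        dists = [((leaf - leaf.mean(axis=0)) ** 2).sum() for leaf in leaves]
        worst = int(np.argmax(dists))
        leaf = leaves.pop(worst)
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(leaf)
        leaves += [leaf[km.labels_ == 0], leaf[km.labels_ == 1]]
    # Codebook = the leaf centroids
    codebook = np.array([leaf.mean(axis=0) for leaf in leaves])
    return codebook, leaves

rng = np.random.default_rng(4)
# Four well-separated 2-D clusters as stand-in input vectors
data = np.vstack([rng.normal(c, 0.1, (50, 2)) for c in (0.0, 1.0, 2.0, 3.0)])
codebook, leaves = greedy_tsvq(data, max_leaves=4)
```

Growing more leaves only refines existing cells, which is the intuition behind the paper's result that distortion goes to zero as the tree grows.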

    Predicting software Size and Development Effort: Models Based on Stepwise Refinement

    This study designed a Software Size Model and an Effort Prediction Model, then performed an empirical analysis of these two models. Each model design began with identifying its objectives, which led to describing the concept to be measured and the meta-model. The numerical assignment rules were then developed, providing a basis for size measurement and effort prediction across software engineering projects. The Software Size Model was designed to test the hypothesis that a software size measure represents the amount of knowledge acquired and stored in software artifacts, and the amount of time it took to acquire and store this knowledge. The Effort Prediction Model is based on the estimation by analogy approach and was designed to test the hypothesis that this model will produce reasonably close predictions when it uses historical data that conforms to the Software Size Model. The empirical study implemented each model, collected and recorded software size data from software engineering project deliverables, simulated effort prediction using the jackknife approach, and computed the absolute relative error and magnitude of relative error (MRE) statistics. This study resulted in 35.3% of the predictions having an MRE value at or below twenty-five percent. This result satisfies the criterion established for the study of having at least 31% of the predictions with an MRE of 25% or less. This study is significant for three reasons. First, no subjective factors were used to estimate effort. The elimination of subjective factors removes a source of error in the predictions and makes the study easier to replicate. Second, both models were described using metrology and measurement theory principles. This allows others to consistently implement the models and to modify them while maintaining the integrity of the models' objectives. Third, the study's hypotheses were validated even though the software artifacts used to collect the software size data varied significantly in both content and quality. Recommendations for further study include applying the Software Size Model to other data-driven estimation models, collecting and using software size data from industry projects, looking at alternatives for how text-based software knowledge is identified and counted, and studying the impact of project cycles and project roles on predicting effort.
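For reference, the MRE and the "percentage of predictions within 25%" criterion used above (often called PRED(25)) can be computed as follows; the four actual/predicted effort pairs are made-up numbers, not data from the study.

```python
# Synthetic actual vs. predicted effort values (illustrative only)
actual = [100.0, 80.0, 120.0, 60.0]
predicted = [110.0, 50.0, 118.0, 90.0]

# MRE: absolute relative error of each prediction
mre = [abs(a - p) / a for a, p in zip(actual, predicted)]

# PRED(25): fraction of predictions with MRE <= 25%
pred25 = sum(m <= 0.25 for m in mre) / len(mre)
```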

    Status of tree ordinances in South Carolina

    In order to support citizens’ efforts to move closer to healthy, stable, and tree-canopied surroundings, it is helpful to know which communities have ordinances and what they are regulating. A number of local governments have faced issues related to drafting and implementing tree ordinances. They know what works well and what doesn’t for their communities, and fortunately, they have been willing to share the information. This study was intended to collect this information in order to enhance awareness of current policy initiatives

    Status of tree ordinances in South Carolina, 2003 October


    Airborne hyperspectral data for the classification of tree species in temperate forests

    This review focuses on the use of airborne hyperspectral imagery in forest species classification. The studies covered concern hyperspectral image classification with various methods; only research whose study areas are located in Europe or North America was selected. Articles were reviewed with respect to the pre-processing methods used, the methods of feature selection or feature extraction, the image classification algorithms, and the tree species that were classified. The whole process of acquiring and working with hyperspectral data is described, and different approaches (e.g., applying or skipping atmospheric corrections) are compared. In each article, various deciduous and conifer species were classified. Studies comparing several classification algorithms (Spectral Angle Mapper, Support Vector Machine, Random Forest) are discussed; in most cases SVM gives the best results. Species classified with the highest accuracy include Scots pine (Pinus sylvestris) and Norway spruce (Picea abies). Broadleaved species are, in general, classified with lower accuracy than conifers; among broadleaved trees, European beech (Fagus sylvatica) and oaks (Quercus sp.) are classified with the highest accuracy.
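As a rough illustration of the SVM approach the review highlights, the sketch below classifies synthetic two-species "spectra". The band count, reflectance statistics, and RBF kernel are assumptions, not taken from any reviewed study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Synthetic "hyperspectral" pixels: 100 bands, two tree species with
# slightly shifted mean reflectance (stand-in for real airborne data)
n, bands = 200, 100
species_a = rng.normal(0.30, 0.05, (n, bands))  # e.g. "spruce"
species_b = rng.normal(0.35, 0.05, (n, bands))  # e.g. "beech"
X = np.vstack([species_a, species_b])
y = np.array([0] * n + [1] * n)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=2, stratify=y)
clf = SVC(kernel='rbf').fit(Xtr, ytr)
accuracy = clf.score(Xte, yte)
```

Real workflows would precede this with atmospheric correction and feature selection or extraction, as the review describes.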