14 research outputs found

    Therapeutic Approach to NAFLD-NASH

    Nonalcoholic fatty liver disease (NAFLD) and its progressive form, nonalcoholic steatohepatitis (NASH), are the hepatic expression of metabolic syndrome and may lead to serious liver injury resulting in cirrhosis and hepatocellular carcinoma (HCC). Despite its seriousness, there is no definitive treatment for this life-threatening condition. Weight loss and exercise remain the cornerstone of therapy, but an array of medications can also be used, with varying degrees of effect on liver inflammation and cirrhosis. The increased risk of cardiovascular events associated with NAFLD/NASH should also be addressed. Statins have been shown to reduce the lipid and inflammatory burden of the liver as well as decrease cardiovascular risk. Aspirin also has a beneficial effect due to its anti-inflammatory properties, as does vitamin E in certain cases. Medications that modulate glucose metabolism and insulin activity (metformin, pioglitazone, GLP-1 agonists, SGLT2 inhibitors) appear to play a vital role in glucose and lipid metabolism, with subsequent amelioration of liver function tests and inhibition of inflammation. The aim of this review is to highlight the efficacy of current therapeutic strategies and to explore the emerging new agents that target newly discovered pathways associated with the pathogenesis of NAFLD/NASH, with promising results.

    A study on affect model validity: nominal vs ordinal labels

    The question of representing emotion computationally remains largely unanswered: popular approaches require annotators to assign a magnitude (or a class) of some emotional dimension, while an alternative is to focus on the relationship between two or more options. Recent evidence in affective computing suggests that following a methodology of ordinal annotation and processing leads to better reliability and validity of the model. This paper compares the generality of classification methods versus preference learning methods in predicting the levels of arousal in two widely used affective datasets. The findings of this initial study further validate the hypothesis that treating affect labels as ordinal data and building models via preference learning yields models of better validity.
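The contrast above between classification and preference learning hinges on a pairwise transform: ordinal labels are converted into binary comparisons between items. A minimal sketch of this common RankSVM-style construction (an illustration of the general technique, not the paper's exact pipeline; all names are ours):

```python
# Hypothetical sketch: turn ordinally labelled items into pairwise
# preference examples, the input format preference learners train on.

def to_pairwise(items):
    """items: list of (feature_vector, ordinal_label) tuples.
    Returns (feature_difference, preference) pairs, where preference is
    +1 if the first item is ranked higher and -1 otherwise."""
    pairs = []
    for i, (xi, yi) in enumerate(items):
        for xj, yj in items[i + 1:]:
            if yi == yj:
                continue  # ties carry no preference information
            diff = [a - b for a, b in zip(xi, xj)]
            pairs.append((diff, 1 if yi > yj else -1))
    return pairs
```

A binary classifier trained on these difference vectors then acts as a ranker, which is one way the "ordinal labels plus preference learning" methodology can be realised.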

    Controllable exploration of a design space via interactive quality diversity

    This paper introduces a user-driven evolutionary algorithm based on Quality Diversity (QD) search. During a design session, the user iteratively selects among presented alternatives, and these selections affect the upcoming results. We implement a variation of the MAP-Elites algorithm where the presented alternatives are sampled from a small region (window) of the behavioral space. After a user selection, the window is centered on the selected individual's behavior characterization, evolution selects parents from within this window to produce offspring, and new alternatives are sampled. Essentially, we define an adaptive system of local QD search, where the user's selections guide the search towards specific regions of the behavioral space. The system is tested on the generation of architectural layouts, a constrained optimization task, leveraging QD search through a two-archive approach.
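The window mechanism described above can be sketched in a few lines: a MAP-Elites archive keeps the fittest solution per behavior cell, and parents are drawn only from a window around the last selection. This is an assumed minimal reading of the method, with a 1D behavior space and illustrative field names, not the authors' implementation:

```python
import random

def insert_elite(archive, sol, resolution=0.1):
    """MAP-Elites insertion: each behavior cell keeps its fittest solution."""
    cell = int(sol["bc"] // resolution)
    incumbent = archive.get(cell)
    if incumbent is None or sol["fitness"] > incumbent["fitness"]:
        archive[cell] = sol

def sample_window(archive, center_bc, width, k):
    """Sample up to k elites whose behavior lies in a window around center_bc,
    i.e. the local region the user's last selection steers the search toward."""
    lo, hi = center_bc - width / 2, center_bc + width / 2
    inside = [s for s in archive.values() if lo <= s["bc"] <= hi]
    return random.sample(inside, min(k, len(inside)))
```

After each user choice, `sample_window` would be re-centered on the chosen individual's behavior characterization, so successive generations stay local to the region the user is exploring.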

    A general-purpose expressive algorithm for room-based environments

    This paper presents a generative architecture for general-purpose room layouts that can be treated as geometric definitions of dungeons, mansions, shooter levels and more. The motivation behind this work is to provide a design tool for virtual environments that combines controllability, expressivity and generality. Towards that end, a two-tier level representation is realized, with a graph-based design specification constraining and guiding the generated geometries, facilitated by constrained evolutionary search. Expressivity is secured through quality-diversity search, which can provide the designer with a broad variety of level layouts to choose from. Finally, the generator is general-purpose, as it can produce layouts based on different types of static grid structures or on freeform, curved structures through an adaptive Voronoi diagram that is evolved along with the level itself. The method is tested on a variety of design specifications and grid types, and the results show that even with complex design constraints or malleable grids the algorithm can produce a broad variety of levels.

    Design space exploration of shell structures using quality diversity algorithms

    Computer-aided optimization algorithms in structural engineering have historically focused on the structural performance of generated forms, often resulting in the selection of a single ‘optimal’ solution. However, diversity of generated solutions is desirable when those solutions are shown to a human user to choose from. Quality-Diversity (QD) search is an emerging field of Evolutionary Computation which can automate the exploration of the solution space in engineering problems. QD algorithms, such as MAP-Elites, operate by maintaining and expanding an archive of diverse solutions, optimizing for quality in local niches of a multidimensional design space. The generated archive of solutions can help engineers gain a better overview of the solution space, illuminating which designs are possible and their trade-offs. In this paper we apply Quality-Diversity search to the problem of designing shell structures. Since the design of shell structures comes with physical constraints, we leverage a constrained-optimization variant of the MAP-Elites algorithm, FI-MAP-Elites. We implement our proposed methodology within the Rhino/Grasshopper environment and use the Karamba Finite Element Analysis solver for all structural engineering calculations. We test our method on case studies of parametric models of shell structures of varying complexity. Our experiments investigate the algorithm's ability to illuminate the solution space and generate feasible, high-quality solutions.
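The constrained variant mentioned above rests on a two-archive idea: feasible solutions compete on quality within their niche, while infeasible ones compete on how little they violate the constraints. A hypothetical sketch of such an insertion rule (our assumed reading of a feasible-infeasible MAP-Elites scheme; field names and the 1D behavior space are illustrative, not the FI-MAP-Elites source):

```python
# Assumed two-archive insertion rule for constrained MAP-Elites:
# "violation" is an aggregate constraint-violation measure (0 == feasible).

def fi_insert(feasible, infeasible, sol, resolution=0.25):
    """Route a candidate into the feasible or infeasible archive of its cell."""
    cell = int(sol["bc"] // resolution)
    if sol["violation"] == 0:
        # feasible solutions compete on structural quality
        incumbent = feasible.get(cell)
        if incumbent is None or sol["quality"] > incumbent["quality"]:
            feasible[cell] = sol
    else:
        # infeasible solutions compete on minimal constraint violation,
        # keeping stepping stones near the feasibility boundary
        incumbent = infeasible.get(cell)
        if incumbent is None or sol["violation"] < incumbent["violation"]:
            infeasible[cell] = sol
```

Keeping near-feasible designs alive is the design rationale: they often mutate into feasible, high-quality solutions that a feasibility-only archive would never reach.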

    Transforming scholarship in the archives through handwritten text recognition: Transkribus as a case study

    Purpose: An overview of the current use of handwritten text recognition (HTR) on archival manuscript material, as provided by the EU H2020-funded Transkribus platform. It explains HTR, demonstrates Transkribus, gives examples of use cases, highlights the effect HTR may have on scholarship, and evidences a turning point in the advanced use of digitised heritage content. - Design/methodology/approach: The paper adopts a case study approach, using the development and delivery of the only openly available HTR platform for manuscript material. - Findings: Transkribus has demonstrated that HTR is now a usable technology that can be employed in conjunction with mass digitisation to generate accurate transcripts of archival material. Use cases are demonstrated, and a cooperative model is suggested as a way to ensure the sustainability and scaling of the platform; however, funding and resourcing issues are identified. - Research limitations/implications: The paper presents results from projects; further user studies could be undertaken involving interviews, surveys, etc. - Practical implications: Only HTR provided via Transkribus is covered; however, this is the only publicly available platform for HTR on individual collections of historical documents at the time of writing, and it represents the current state of the art in this field. - Social implications: The increased access to the information contained within historical texts has the potential to be transformational for both institutions and individuals. - Originality/value: This is the first published overview of how HTR is used by the wide archival studies community, reporting and showcasing the current application of handwriting technology in the cultural heritage sector.

    Familial hypercholesterolaemia in children and adolescents from 48 countries: a cross-sectional study

    Background: Approximately 450 000 children are born with familial hypercholesterolaemia worldwide every year, yet only 2·1% of adults with familial hypercholesterolaemia were diagnosed before age 18 years via current diagnostic approaches, which are derived from observations in adults. We aimed to characterise children and adolescents with heterozygous familial hypercholesterolaemia (HeFH) and understand current approaches to the identification and management of familial hypercholesterolaemia to inform future public health strategies. Methods: For this cross-sectional study, we assessed children and adolescents younger than 18 years with a clinical or genetic diagnosis of HeFH at the time of entry into the Familial Hypercholesterolaemia Studies Collaboration (FHSC) registry between Oct 1, 2015, and Jan 31, 2021. Data in the registry were collected from 55 regional or national registries in 48 countries. Diagnoses relying on self-reported history of familial hypercholesterolaemia and suspected secondary hypercholesterolaemia were excluded from the registry; people with untreated LDL cholesterol (LDL-C) of at least 13·0 mmol/L were excluded from this study. Data were assessed overall and by WHO region, World Bank country income status, age, diagnostic criteria, and index-case status. The main outcome of this study was to assess current identification and management of children and adolescents with familial hypercholesterolaemia. Findings: Of 63 093 individuals in the FHSC registry, 11 848 (18·8%) were children or adolescents younger than 18 years with HeFH and were included in this study; 5756 (50·2%) of 11 476 included individuals were female and 5720 (49·8%) were male. Sex data were missing for 372 (3·1%) of 11 848 individuals. Median age at registry entry was 9·6 years (IQR 5·8-13·2). 10 099 (89·9%) of 11 235 included individuals had a final genetically confirmed diagnosis of familial hypercholesterolaemia and 1136 (10·1%) had a clinical diagnosis. 
Genetically confirmed diagnosis data or clinical diagnosis data were missing for 613 (5·2%) of 11 848 individuals. Genetic diagnosis was more common in children and adolescents from high-income countries (9427 [92·4%] of 10 202) than in children and adolescents from non-high-income countries (199 [48·0%] of 415). 3414 (31·6%) of 10 804 children or adolescents were index cases. Familial-hypercholesterolaemia-related physical signs, cardiovascular risk factors, and cardiovascular disease were uncommon, but were more common in non-high-income countries. 7557 (72·4%) of 10 428 included children or adolescents were not taking lipid-lowering medication (LLM) and had a median LDL-C of 5·00 mmol/L (IQR 4·05-6·08). Compared with genetic diagnosis, the use of unadapted clinical criteria intended for use in adults and reliant on more extreme phenotypes could result in 50-75% of children and adolescents with familial hypercholesterolaemia not being identified. Interpretation: Clinical characteristics observed in adults with familial hypercholesterolaemia are uncommon in children and adolescents with familial hypercholesterolaemia; hence, detection in this age group relies on measurement of LDL-C and genetic confirmation. Where genetic testing is unavailable, increased availability and use of LDL-C measurements in the first few years of life could help reduce the current gap between prevalence and detection, enabling increased use of combination LLM to reach recommended LDL-C targets early in life.

    Call handling and mobility management in wireless ATM networks.

    No full text
    This thesis begins by addressing the problems and challenges faced in a multimedia, ATM-compatible wireless LAN environment. A brief overview of ATM is also presented. An extension to the conventional wireless (cellular) architecture, which takes advantage of the characteristics of ATM, is considered. The needs of the applications that will use such a network, along with the services this network is expected to offer, are discussed. Existing wireless protocols (both in wireless LANs and in the cellular architecture) are presented, the differences among their MAC schemes are examined, and the criteria these schemes must satisfy to support ATM are discussed. Finally, the most promising MAC schemes are discussed in further detail. The introduction of terminal and user mobility into an ATM LAN creates the need to modify existing network functions. New problems associated with mobile stations must be addressed, such as location management, paging, registration, authentication, network security and handover implementation. Furthermore, existing functions of fixed ATM, such as Connection Admission Control and traffic shaping, need to be extended to support the requested QoS in the wireless environment. The next part of the thesis discusses the different types of handover mechanisms and presents possible extensions of the UNI and PNNI that support the exchange of handover messages. The concept of the Mobile Agent (MA) is also introduced, and its use in the extended UNI and PNNI for handover execution, registration and location management is presented. Finally, a further extension of the PNNI protocol, which could be used among the different MAs to support portability across different private ATM LANs, is discussed. Without a doubt there will be a requirement for interworking between ATM and the already established wireless networks (e.g. HIPERLAN, DECT, IEEE 802.). 
The use of ATM as a wireless network backbone is particularly advantageous in microcell and/or integrated voice/data scenarios, and is cost-competitive with other possible implementations. With that in mind, the transmission of ATM cells over a WLAN based on the IEEE 802.11 MAC layer has been investigated. First, the IEEE 802.11 MAC layer and its model (developed in the BONeS software package) are discussed. The simulation results show the existence of upper-bound delays for delay-sensitive applications, such as voice and video, which are not affected by the traffic load on the network. Moreover, a power-saving addition to a Dynamic Time Division Multiple Access (D-TDMA) MAC protocol suitable for ATM cell transmission, discussed in [APOS95], is proposed, followed by a short presentation of the MAC protocol and the proposed power-saving algorithm. The trade-off between the power-saving gain and the size of the buffer in the Access Point (AP) is shown for different kinds of services. Finally, the calculation of call blocking and dropping probabilities for different services in a radio environment, based on Markov chains, is addressed. This method considers both the different QoS requirements of each service and the load on the network. The results obtained from the analysis are compared with those obtained from simulations.
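The closing step, call blocking probabilities from a Markov-chain model, can be illustrated by its classic single-service special case: the Erlang B formula for an M/M/c/c loss system, computed with the standard stable recursion. This is a textbook illustration only; the thesis's multi-service, QoS-aware model is more general:

```python
def erlang_b(offered_load, channels):
    """Blocking probability of an M/M/c/c loss system (Erlang B).
    offered_load is the traffic intensity in erlangs; the recursion
    B(0) = 1, B(c) = A*B(c-1) / (c + A*B(c-1)) avoids large factorials."""
    b = 1.0  # blocking probability with zero channels
    for c in range(1, channels + 1):
        b = offered_load * b / (c + offered_load * b)
    return b
```

For example, 2 erlangs offered to 2 channels yields a blocking probability of about 0.4; adding channels or reducing load drives it toward zero, which is the trade-off a cell-dimensioning analysis explores.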

    Fault models, algorithms and built-in techniques for testing DRAM memory circuits

    No full text
    Due to the revolutionary progress in the manufacturing process of Integrated Circuits (ICs) over the last decades, electronic systems have become a part of everyday life. The direct results of this progress are the increased computing and storage capability of electronic systems, at an affordable or even low cost, and their mobility. However, although this progress rate has been constantly high for almost five decades, there are various threats to the further evolution of semiconductor technologies. One of the greatest threats is the rapidly increasing difficulty of testing ICs. Dynamic Random Access Memories (DRAMs) are one of the most important parts of digital systems, from both a performance and a system-failure perspective; thus, their reliability is critical. Moreover, as in all IC technologies, reliability issues grow more rapidly than the manufacturing processes evolve. Consequently, even though manufacturing evolution is important, the development of new, more efficient and more reliable testing solutions turns out to be of equal importance. The problems in testing and reliability of ICs mainly stem from the dimension shrinking of electronic devices, which aims to scale up their integration in a small silicon area. In DRAMs, the shrinking of memory cells' dimensions and of the distances between them gives rise to various undesired side effects. Among the most important is the increased interaction between neighbouring cells. This interaction can cause complex faulty behaviours that are frequently hard to detect, since they appear only under specific conditions (e.g. when the neighbouring cells are in a certain state). The neighbouring-cell interaction issue is addressed in this dissertation with two different approaches. In the first approach, we refine an existing fault model that deals with neighbouring-cell interactions, the NPSF model, in order to provide test solutions with an acceptable cost in test application time. 
The test application time reduction achieved by our new test algorithm is 57.7% with respect to well-known test algorithms that cover these faults. At the same time, we provide test solutions for the cases where the NPSF faults are combined with Bit-Line influence and Word-Line capacitive-coupling faults. In the second approach, we propose a new fault model, the Neighborhood Leakage and Transition Fault (NLTF) model, which targets specific well-known interaction mechanisms. The test solution derived from this new fault model further reduces the test application time, by up to 87% with respect to well-known test algorithms that are also capable of covering these faults. Another difficulty in DRAM memory testing is the fact that even the simplest defect can produce a quite complex faulty behaviour, mainly because in practice a DRAM is an analogue circuit. Our electrical simulations on a DRAM circuit with a resistive open defect manifested this complex faulty behaviour. Moreover, we observed an important, previously unknown phenomenon, charge accumulation, that significantly influences the testing procedure. Based on our observations we developed an efficient test algorithm that provides enhanced coverage of resistive open faults with respect to existing solutions. Finally, one of the most attractive testing solutions is Built-In Self-Test (BIST) circuitry, which has gained great attention during the last two decades. Towards this direction, we have developed a BIST circuit that implements the NLTF test algorithm for DRAM testing. The outcome of this task demonstrated the ability to efficiently embed complex test algorithms in a memory at a low silicon-area and design cost. The functionality of the BIST circuit was verified through simulations.
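The dissertation's NPSF and NLTF algorithms are specialised, but the family of march tests they are benchmarked against can be illustrated with the classic March C- algorithm, sketched here with injectable read/write callbacks so a fault model can be plugged in. This is a generic textbook example, not the dissertation's algorithm:

```python
def march_c_minus(size, read, write):
    """Classic March C- memory test:
    up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); up(r0).
    Returns the sorted addresses at which a read returned the wrong value."""
    fails = set()
    for a in range(size):              # up(w0): initialise all cells to 0
        write(a, 0)
    for a in range(size):              # up(r0, w1)
        if read(a) != 0:
            fails.add(a)
        write(a, 1)
    for a in range(size):              # up(r1, w0)
        if read(a) != 1:
            fails.add(a)
        write(a, 0)
    for a in reversed(range(size)):    # down(r0, w1)
        if read(a) != 0:
            fails.add(a)
        write(a, 1)
    for a in reversed(range(size)):    # down(r1, w0)
        if read(a) != 1:
            fails.add(a)
        write(a, 0)
    for a in range(size):              # up(r0): final verification pass
        if read(a) != 0:
            fails.add(a)
    return sorted(fails)
```

Test cost grows linearly with memory size here (10N operations), which is why neighbourhood-sensitive models such as NPSF, whose naive tests are far longer, make the test-time reductions reported above significant.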