
    Towards Reliable Brain-Computer Interface: Achieving Perfect Accuracy by Sacrificing Time

    Brain-computer interface (BCI) is a computer system for extracting the brain's electric neural signals and using them to control computer applications. To operate a BCI, the user must concentrate on some mental task. Besides measuring the signals, a BCI converts the raw electric signal into a digital representation and maps the data to computer commands. Unfortunately, the probability of predicting the right command is below 100%, and therefore the reliability of these systems is relatively low.

    Low reliability is a huge problem for BCI, since these systems will not be widely trusted and used while their prediction accuracy is low. Existing solutions usually try to improve the prediction accuracy of BCI without focusing much on the time required for a single concentration attempt; they apply different prediction models and signal-processing techniques to raise the prediction accuracy. Our solution goes the opposite way: it tries to discover how many concentration attempts should be made in a row (i.e. how long it takes) to guarantee a prediction accuracy of 99%.

    The solution described in the thesis is based on Condorcet's jury theorem [1]. It states that if there are two options and the chance of picking the correct one is greater than 50%, then, over several attempts in a row, the probability that the majority vote picks the correct option rises with the number of attempts. In this work we apply Condorcet's main principle in a BCI setting. First we develop a system whose prediction accuracy for a single concentration attempt exceeds 50%, and then we use multiple concentration attempts in a row to improve the overall accuracy. We expect that given enough attempts we can reach 99% classification accuracy. We compare the empirical results with the theoretical estimates and discuss them.

    BCI technology is a relatively young field. Fully integrating it into our everyday lives will require contributions from scientists and engineers to make BCI a reliable system. The following work contributes to the reliability of BCI systems.
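The majority-vote argument can be made concrete with the binomial form of Condorcet's jury theorem. A minimal sketch (the function names and the restriction to an odd number of attempts, which rules out ties, are our assumptions, not the thesis's notation):

```python
from math import comb

def majority_accuracy(p, n):
    """Probability that the majority of n independent attempts,
    each correct with probability p, picks the right option.
    n is assumed odd so a tie cannot occur."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def attempts_needed(p, target=0.99):
    """Smallest odd number of attempts whose majority vote
    reaches the target accuracy (requires p > 0.5)."""
    n = 1
    while majority_accuracy(p, n) < target:
        n += 2
    return n
```

For example, a single-attempt accuracy of 90% already reaches the 99% target with a majority vote over 5 attempts, while accuracies only slightly above 50% need far more attempts, which is exactly the time/accuracy trade-off the thesis studies.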

    Optimal instance selection for improved decision tree

    Instance selection plays an important role in improving the scalability of data mining algorithms, but it can also be used to improve the quality of the data mining results. In this dissertation we present a new optimization-based approach for instance selection that uses a genetic algorithm (GA) to select a subset of instances that produces a simpler decision tree with acceptable accuracy. The resulting trees are likely to be easier to comprehend and interpret by the decision maker and hence more useful in practice. We present numerical results for several difficult test datasets indicating that GA-based instance selection can often reduce the size of the decision tree by an order of magnitude while still maintaining good prediction accuracy. The results suggest that GA-based instance selection works best for low-entropy datasets; with higher entropy, there is less benefit from instance selection. A comparison between the GA and other heuristic approaches, such as Rmhc (Random Mutation Hill Climbing) and a simple construction heuristic, indicates that the GA is able to obtain a good solution with low computation cost even for some large datasets. One advantage of instance selection is that it increases the average number of instances associated with the leaves of the decision tree, avoiding overfitting; instance selection can therefore be used as an effective alternative to pruning decision trees. Finally, the analysis of the selected instances reveals that instance selection helps to reduce outliers, reduce missing values, and select the instances most useful for separating classes.
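The GA loop over instance subsets can be sketched generically. A minimal stdlib-only sketch, where the dissertation's actual fitness (inducing a decision tree on the selected subset and trading tree size against accuracy) is abstracted behind a caller-supplied `fitness` function, and the population size, crossover, and mutation parameters are illustrative assumptions:

```python
import random

def ga_select(n_instances, fitness, pop_size=30, generations=50,
              mutation_rate=0.02, seed=0):
    """Search for an instance subset (a list of booleans, one per
    instance) that maximizes the caller-supplied fitness function.
    Uses elitism, one-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.random() < 0.5 for _ in range(n_instances)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the better half unchanged (elitism).
        elite = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_instances)      # one-point crossover
            child = a[:cut] + b[cut:]
            # Flip each bit with a small probability (mutation).
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

In the dissertation's setting the fitness would train a tree on the selected instances and score the resulting size/accuracy trade-off; any subset evaluator with the same signature plugs in unchanged.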

    Interactive visualization for knowledge discovery


    Intermediate Decision Trees

    Intermediate decision trees are the subtrees of the full (unpruned) decision tree generated in a breadth-first order. An extensive empirical investigation evaluates the classification error of intermediate decision trees and compares their performance to full and pruned trees. Empirical results were generated using C4.5 with 66 databases from the UCI machine learning database repository. Results show that, when attempting to minimize the error of the pruned tree produced by C4.5, the best intermediate tree performs significantly better in 46 of the 66 databases. These and other results question the effectiveness of decision tree pruning strategies and suggest further consideration of the full tree and its intermediates. The results also reveal specific properties satisfied by databases in which intermediate trees perform best. Such relationships improve guidelines for selecting appropriate inductive strategies based on domain properties.
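The breadth-first notion of an intermediate tree can be illustrated on a toy tree structure. A minimal sketch under our own assumptions (the tuple-based node format and the stored majority-class label are illustrative, not C4.5's representation): an intermediate tree keeps only the first k internal nodes in breadth-first order expanded and collapses every later split into its majority-class leaf.

```python
from collections import deque

# Hypothetical node format: ("leaf", label) or
# ("split", attribute, {test_value: child, ...}, majority_label).

def bfs_internal_nodes(tree):
    """List the internal (split) nodes in breadth-first order."""
    order, queue = [], deque([tree])
    while queue:
        node = queue.popleft()
        if node[0] == "split":
            order.append(node)
            queue.extend(node[2].values())
    return order

def intermediate_tree(tree, k, expanded=None):
    """The intermediate tree keeping only the first k BFS internal
    nodes expanded; every later split collapses to a majority leaf.
    (BFS order guarantees a node's ancestors precede it, so any
    prefix of the order yields a consistent tree.)"""
    if expanded is None:
        expanded = {id(n) for n in bfs_internal_nodes(tree)[:k]}
    if tree[0] == "leaf":
        return tree
    if id(tree) not in expanded:
        return ("leaf", tree[3])          # collapse to majority class
    attr, children, majority = tree[1], tree[2], tree[3]
    return ("split", attr,
            {v: intermediate_tree(c, k, expanded)
             for v, c in children.items()},
            majority)
```

With k = 0 this yields the single-leaf stub, and with k equal to the number of splits it reproduces the full tree, so sweeping k enumerates exactly the breadth-first intermediates the paper evaluates.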