    Safety-critical scenarios and virtual testing procedures for automated cars at road intersections

    This thesis addresses the problem of road intersection safety with regard to a mixed population of automated vehicles and non-automated road users. The work derives and evaluates safety-critical scenarios at road junctions, which can pose a particular safety problem for automated cars. A simulation and evaluation framework for car-to-car accidents is presented and demonstrated, which allows examining the safety performance of automated driving systems within those scenarios. Given the recent advancements in automated driving functions, one of the main challenges is safe and efficient operation in complex traffic situations such as road junctions. There is a need for comprehensive testing, either in virtual testing environments or on real-world test tracks. Since it is unrealistic to cover all possible combinations of traffic situations and environmental conditions, the challenge is to find the key driving situations to be evaluated at junctions. Against this background, a novel method to derive critical pre-crash scenarios from historical car accident data is presented. It employs k-medoids to cluster historical junction crash data into distinct partitions and then applies the association rules algorithm to each cluster to specify the driving scenarios in more detail. The dataset used consists of 1,056 junction crashes in the UK, which were exported from the in-depth On-the-Spot database. The study resulted in thirteen crash clusters for T-junctions and six crash clusters for crossroads. Association rules revealed common crash characteristics, which were the basis for the scenario descriptions. As a follow-up to the scenario generation, the thesis further presents a novel, modular framework to transfer the derived collision scenarios to a sub-microscopic traffic simulation environment. The software CarMaker is used with MATLAB/Simulink to simulate realistic models of vehicles, sensors and road environments, and is combined with an advanced Monte Carlo method to obtain a representative set of parameter combinations. The analysis of different safety performance indicators computed from the simulation outputs reveals collision and near-miss probabilities for selected scenarios. The usefulness and applicability of the simulation and evaluation framework are demonstrated for a selected junction scenario, where the safety performance of different in-vehicle collision avoidance systems is studied. The results show that the number of collisions and conflicts was reduced to a tenth when a crossing and turning assistant was added to a basic forward collision avoidance system. Due to its modular architecture, the presented framework can be adapted to the individual needs of future users and may be enhanced with customised simulation models. Ultimately, the thesis leads to more efficient workflows when virtually testing automated driving at intersections, as a complement to field operational tests on public roads.
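    The two-stage method sketched in this abstract (k-medoids clustering of junction crashes, then association-rule mining within each cluster) can be illustrated roughly as follows. This is a minimal, hedged sketch: it assumes the scikit-learn-extra and mlxtend packages and a tiny, hypothetical one-hot-encoded crash table, none of which are specified by the thesis itself.

        # Minimal sketch: k-medoids clustering of crash records, then association
        # rules per cluster. Packages and the crash table are assumptions, not the
        # thesis implementation.
        import pandas as pd
        from sklearn_extra.cluster import KMedoids
        from mlxtend.frequent_patterns import apriori, association_rules

        # Hypothetical one-hot-encoded crash attributes (one row per junction crash).
        crashes = pd.DataFrame({
            "t_junction":    [1, 1, 0, 0, 1, 1],
            "darkness":      [1, 0, 0, 1, 1, 0],
            "wet_surface":   [0, 1, 0, 1, 0, 1],
            "right_turn":    [1, 1, 0, 0, 1, 0],
            "speed_over_30": [0, 0, 1, 1, 0, 1],
        }, dtype=bool)

        # Stage 1: partition the crashes into k clusters with k-medoids (PAM).
        k = 2
        labels = KMedoids(n_clusters=k, metric="manhattan", method="pam",
                          random_state=0).fit_predict(crashes.astype(int))

        # Stage 2: mine association rules inside each cluster to describe it.
        for c in range(k):
            cluster = crashes[labels == c]
            frequent = apriori(cluster, min_support=0.3, use_colnames=True)
            rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
            print(f"cluster {c}: {len(cluster)} crashes")
            print(rules[["antecedents", "consequents", "support", "confidence"]])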

    Intelligent Model of Home Furnishing and Transportation Based on Improved RFID Web Fuzzy Clustering

    This paper applies a fuzzy clustering method based on the division of clustering paths derived from user access paths. Within each cluster, users are further partitioned according to access time and user-agent classification, and then according to the order in which they visit the pages. The hardware of the intelligent home furnishing controller consists of an advanced ARM9 embedded system, a mobile phone module and an RFID module. By sharing traffic information, the intelligent transportation system can realise coordinated traffic signal control, effective traffic prediction and traffic grooming. The paper presents an intelligent model of home furnishing and transportation based on improved RFID web fuzzy clustering. Experiments show that RFID combined with fuzzy clustering can improve the reliability and effectiveness of intelligent traffic and home furnishing systems.
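    Since the abstract names fuzzy clustering only in general terms, the following is a minimal, self-contained NumPy sketch of the standard fuzzy c-means update rules that such a clustering step typically relies on; the synthetic data and parameter values are purely illustrative and are not taken from the paper.

        # Minimal fuzzy c-means sketch in plain NumPy (illustrative only).
        import numpy as np

        def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
            """Cluster the rows of X into c fuzzy clusters with fuzzifier m."""
            rng = np.random.default_rng(seed)
            u = rng.random((len(X), c))
            u /= u.sum(axis=1, keepdims=True)                  # random initial memberships
            for _ in range(iters):
                um = u ** m
                centers = um.T @ X / um.sum(axis=0)[:, None]   # membership-weighted centroids
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                u = 1.0 / d ** (2.0 / (m - 1.0))               # inverse-distance memberships
                u /= u.sum(axis=1, keepdims=True)              # normalise per sample
            return centers, u

        # Example: synthetic three-dimensional "user access" feature vectors.
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(loc, 0.1, size=(20, 3)) for loc in (0.0, 0.5, 1.0)])
        centers, memberships = fuzzy_cmeans(X, c=3)
        print(centers.round(2))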

    New methods for discovering local behaviour in mixed databases

    Clustering techniques are widely used, and there are many applications in which one wishes to automatically find groups or hidden information in a data set. Among these applications is finding a model of a system based on the integration of several local models. A local model can have many structures; however, a linear structure is the most common one, due to its simplicity. This work aims at improvements in several fields, all of which are applied to the task of finding a set of local models in a database. On the one hand, a way of codifying categorical information into numerical values has been designed, so that a numerical algorithm can be applied to the whole data set. On the other hand, a cost index has been developed, which is optimised globally to find the parameters of the local clusters that best define the output of the process. Each of the techniques has been applied to several experiments, and the results show improvements over existing techniques. Barceló Rico, F. (2009). New methods for discovering local behaviour in mixed databases. http://hdl.handle.net/10251/12739
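    As a rough illustration of the pipeline the thesis works on (codify categorical attributes numerically, partition the data, then fit one linear local model per partition), a hedged sketch with made-up data and off-the-shelf pandas/scikit-learn components is given below; it is not the specific codification scheme or the global cost index proposed in the thesis.

        # Illustrative clusterwise linear modelling pipeline: encode categorical
        # columns, partition the samples, fit a linear local model per partition.
        # Not the codification or global cost index of the thesis.
        import numpy as np
        import pandas as pd
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "x":        rng.uniform(0, 10, 200),
            "category": rng.choice(["A", "B"], 200),
        })
        # Two different local linear behaviours, one per category.
        df["y"] = np.where(df["category"] == "A", 2.0 * df["x"] + 1.0, -1.0 * df["x"] + 5.0)

        # Step 1: codify the categorical column into numerical values (one-hot here).
        X = pd.get_dummies(df[["x", "category"]], columns=["category"]).to_numpy(float)

        # Step 2: partition the data set.
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

        # Step 3: fit one linear local model per partition.
        for c in np.unique(labels):
            local = LinearRegression().fit(X[labels == c], df["y"][labels == c])
            print(f"cluster {c}: coef={local.coef_.round(2)}, intercept={local.intercept_:.2f}")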

    Fundamentals

    Get PDF
    Volume 1 establishes the foundations of this new field. It goes through all the steps from data collection, through summarisation and clustering, to the different aspects of resource-aware learning, i.e., hardware, memory, energy, and communication awareness. Machine learning methods are inspected with respect to their resource requirements and to how scalability can be enhanced on diverse computing architectures, ranging from embedded systems to large computing clusters.

    Improving a wireless localization system via machine learning techniques and security protocols

    The recent advancements made in Internet of Things (IoT) devices have brought forth new opportunities for technologies and systems to be integrated into our everyday life. In this work, we investigate how edge nodes can effectively utilize the 802.11 wireless beacon frames broadcast from pre-existing access points in a building to achieve room-level localization. We explain the hardware and software needed for this system and demonstrate a proof of concept with experimental data analysis. Improvements to localization accuracy are shown via machine learning by implementing the random forest algorithm; with this algorithm, historical data can be used to train the model so that it makes more informed decisions when tracking other nodes in the future. We also describe multiple security protocols that can be adopted to reduce the threat of both physical and digital attacks on the system. These threats include access point spoofing, side channel analysis, and packet sniffing, all of which are often overlooked in IoT devices that are rushed to market. Our research demonstrates the comprehensive combination of affordability, accuracy, and security possible in an IoT beacon frame-based localization system, a combination that has not been fully explored by the localization research community.
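    A minimal sketch of the classification step described above (training a random forest on beacon RSSI readings to predict the room) might look like the following; the access points, RSSI values and room labels are invented for illustration, and the real system of course works from live 802.11 beacon frames rather than a hard-coded table.

        # Illustrative room-level localization from beacon RSSI readings with a
        # random forest. Access points, readings and room labels are invented.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        # Each row: RSSI (dBm) observed from three access points; label: room.
        rssi = np.array([
            [-40, -70, -85], [-42, -72, -80], [-45, -68, -82],   # room "101"
            [-75, -41, -66], [-73, -44, -69], [-70, -43, -64],   # room "102"
            [-82, -65, -39], [-80, -67, -42], [-84, -62, -44],   # room "103"
        ])
        rooms = np.array(["101"] * 3 + ["102"] * 3 + ["103"] * 3)

        X_train, X_test, y_train, y_test = train_test_split(
            rssi, rooms, test_size=3, random_state=0, stratify=rooms)

        # Historical fingerprints train the forest; new beacon scans are then classified.
        model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
        print("held-out accuracy:", model.score(X_test, y_test))
        print("predicted room for a new scan:", model.predict([[-44, -71, -83]])[0])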

    Analyses Of Crash Occurrence And Injury Severities On Multi Lane Highways Using Machine Learning Algorithms

    Reduction of crash occurrence at various roadway locations (mid-block segments; signalized intersections; un-signalized intersections) and the mitigation of injury severity in the event of a crash are the major concerns of transportation safety engineers. Multi-lane arterial roadways (excluding freeways and expressways) account for forty-three percent of fatal crashes in the state of Florida. Significant contributing causes fall under the broad categories of aggressive driver behavior; adverse weather and environmental conditions; and roadway geometric and traffic factors. The objective of this research was the implementation of innovative, state-of-the-art analytical methods to identify the contributing factors for crashes and injury severity. Advances in computational methods enable the use of modern statistical and machine learning algorithms. Even though most of the contributing factors are known a priori, advanced methods unearth changing trends. Heuristic evolutionary processes such as genetic programming, sophisticated data mining methods like conditional inference trees, and mathematical treatments in the form of sensitivity analyses constitute the major contributions of this research. Application of traditional statistical methods such as simultaneous ordered probit models, and the identification and resolution of crash data problems, are also key aspects of this study. In order to eliminate the use of an unrealistic uniform intersection influence radius of 250 ft, heuristic rules were developed for assigning crashes to roadway segments, signalized intersections and access points using parameters such as 'site location', 'traffic control' and node information. Use of Conditional Inference Forests instead of Classification and Regression Trees to identify variables of significance for injury severity analysis removed the bias towards the selection of continuous variables or variables with a large number of categories. For the injury severity analysis of crashes on highways, the corridors were clustered into four optimum groups. The optimum number of clusters was found using the Partitioning Around Medoids algorithm. Concepts from evolutionary biology such as crossover and mutation were implemented to develop models for classification and regression analyses based on the highest hit rate and minimum error rate, respectively. A low crossover rate and a higher mutation rate reduce the chances of genetic drift and bring novelty to the model development process. Annual daily traffic, friction coefficient of pavements, on-street parking, curbed medians, surface and shoulder widths, and alcohol/drug usage are some of the significant factors that played a role in both crash occurrence and injury severities. Relative sensitivity analyses were used to identify the effect of continuous variables on the variation of crash counts. This study improved the understanding of the significant factors that could play an important role in designing better safety countermeasures on multi-lane highways, and hence enhance their safety by reducing the frequency of crashes and the severity of injuries. Educating young people about the abuse of alcohol and drugs, specifically at high schools and colleges, could potentially lead to lower driver aggression. Removal of on-street parking from high-speed arterials could result in a likely drop in the number of crashes. Widening of shoulders could give drivers greater maneuvering space. Improving pavement conditions for a better friction coefficient will lead to improved crash recovery. Adding lanes to alleviate problems arising from increased ADT, and restricting trucks to the slower right lanes on highways, would not only reduce crash occurrences but also result in lower injury severity levels.
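    One concrete step mentioned above, choosing the optimum number of corridor clusters with Partitioning Around Medoids, could be sketched as follows; the corridor features are made up, scikit-learn-extra's KMedoids stands in for whatever implementation the study used, and the silhouette index is one common (assumed) selection criterion.

        # Sketch: pick the optimum number of corridor clusters with PAM and the
        # silhouette criterion. Features are invented; the study does not
        # prescribe this particular implementation.
        import numpy as np
        from sklearn_extra.cluster import KMedoids
        from sklearn.metrics import silhouette_score

        rng = np.random.default_rng(0)
        # Hypothetical corridor features: ADT (thousands), friction coefficient,
        # shoulder width (ft).
        corridors = rng.normal([20.0, 0.35, 6.0], [5.0, 0.05, 2.0], size=(40, 3))

        best_k, best_score = None, -1.0
        for k in range(2, 8):
            labels = KMedoids(n_clusters=k, method="pam", random_state=0).fit_predict(corridors)
            score = silhouette_score(corridors, labels)
            if score > best_score:
                best_k, best_score = k, score
        print(f"optimum number of clusters: {best_k} (silhouette {best_score:.2f})")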

    Robust and Efficient Data Clustering with Signal Processing on Graphs

    Data is pervasive in today's world and has actually been for quite some time. With the increasing volume of data to process, there is a need for techniques that are faster than, and at least as accurate as, what we already have. In particular, the last decade saw the effervescence of social networks and of ubiquitous sensing (through smartphones and the Internet of Things). These phenomena, together with progress in bioinformatics and traffic monitoring, pushed forward research on graph analysis and called for more efficient techniques. Clustering is an important field of machine learning because it belongs to the unsupervised techniques (i.e., one does not need to possess a ground truth about the data to start learning). With it, one can extract meaningful patterns from large data sources without requiring an expert to annotate a portion of the data, which can be very costly. However, the clustering techniques designed so far all tend to be computationally demanding and have trouble scaling to the size of today's problems. The emergence of Graph Signal Processing (GSP), which attempts to apply traditional signal processing techniques on graphs instead of time, provided additional tools for efficient graph analysis. By considering the clustering assignment as a signal lying on the nodes of the graph, one may now apply the tools of GSP to the improvement of graph clustering and, more generally, of data clustering at large. In this thesis, we present several techniques that use some of the latest developments of GSP in order to improve the scalability of clustering, while aiming for an accuracy resembling that of Spectral Clustering, a famous graph clustering technique that possesses a solid mathematical intuition. On the one hand, we explore the practical and theoretical benefits of random signal filtering for the determination of the eigenvectors of the graph Laplacian. In practice, this requires the design of polynomial approximations of the step function, for which we provide an accelerated heuristic. We use this line of work to reduce the complexity of dynamic graph clustering, the problem of defining, at each snapshot, a partition of a graph that evolves in time. We also use it to propose a fast method for determining the subspace generated by the first eigenvectors of any symmetric matrix. This element is useful for clustering, as it serves in Spectral Clustering, but it goes beyond that, since it also serves in graph visualization (with Laplacian Eigenmaps) and data mining (with Principal Components Projection). On the other hand, inspired by the latest works on graph filter localization, we propose an extremely fast clustering technique that performs clustering using only graph filtering and combines the results to obtain a partition of the nodes. These different contributions are completed by experiments using both synthetic datasets and real-world problems. Since we think that research should be shared in order to progress, all the experiments made in this thesis are publicly available on my personal Github account.
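    For reference, the baseline that the thesis aims to approximate more cheaply, classical Spectral Clustering (embed the nodes with the first eigenvectors of the graph Laplacian, then run k-means in that embedding), can be sketched as below with SciPy and scikit-learn on a toy graph. The exact eigendecomposition shown here is the expensive step that the thesis replaces with polynomial graph filters applied to random signals; this sketch is only the textbook baseline, not the proposed methods.

        # Baseline Spectral Clustering sketch: embed nodes with the first
        # eigenvectors of the normalized graph Laplacian, then k-means in that
        # embedding. The exact eigendecomposition is the step the thesis speeds up.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.csgraph import laplacian
        from sklearn.cluster import KMeans

        # Toy graph: two 10-node cliques joined by a single edge.
        clique = np.ones((10, 10)) - np.eye(10)
        A = np.block([[clique, np.zeros((10, 10))],
                      [np.zeros((10, 10)), clique]])
        A[9, 10] = A[10, 9] = 1.0
        A = sp.csr_matrix(A)

        k = 2
        L = laplacian(A, normed=True).toarray()   # normalized graph Laplacian
        eigvals, eigvecs = np.linalg.eigh(L)      # dense solve is fine at toy size
        embedding = eigvecs[:, :k]                # first k eigenvectors
        embedding /= np.linalg.norm(embedding, axis=1, keepdims=True)
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embedding)
        print(labels)                             # one block of 0s, one block of 1s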

    Contributions to time series data mining departing from the problem of road travel time modeling

    194 p. Advanced Traveller Information Systems (ATIS) collect, process and disseminate the data gathered by sensors on the road network, assisting users during their trips and supporting the decisions they have to take before departure and en route [5]. To this end, ATIS require traffic models, which serve to describe, simulate and forecast the traffic variables that may be useful to travellers. Of all the traffic variables that could be considered (flow, road occupancy, speeds, etc.), travel time is the most intuitive and easily understood by users and therefore the one of particular importance in ATIS [6]. Travel time is the time a vehicle needs to travel from one predefined point to another. In travel time modelling, two main problems are distinguished: estimation and prediction. Although the literature sometimes treats these two concepts as equivalent, they are in fact two distinct problems, with different characteristics and objectives, requiring different techniques. On the one hand, the goal of travel time estimation is to compute how long, on average, vehicles took on trips that have already been completed, using the traffic information collected along the route and/or other data (weather, calendar information, etc.) [1]. The different estimation methods can be classified according to the type and amount of available data, and they serve for a posteriori assessments. On the other hand, travel time prediction consists of computing the travel times of trips that start now or in the future, using the traffic data collected up to the moment the prediction is made, historical data and contextual information [8].

    As the number of vehicles and of traffic jams grows, good travel time estimates and predictions become ever more necessary, because they enable proper traffic management. Against this background, a great variety of models have been proposed and published in recent years. Accordingly, the first part of this thesis provides an in-depth review and analysis of the literature, from which we conclude that not all of the proposed models are suitable for every road network, traffic situation and data type. Indeed, the most notable conclusion is that many of the published models do not meet the practical requirements of ATIS. First, many models can only be applied to short road stretches, and it is not clear how they could be extended to an entire network. Second, most models use a single data type, whereas in practice it is common to have to work with more than one. Finally, limited flexibility in the face of non-recurrent congestion is another notable and common drawback. Given this, combined or hybrid models appear to be the most promising of all these proposals, because they can adapt to different patterns and allow different models and data types to be mixed.

    In this thesis, we take hybrid or combined travel time prediction models as our starting point. In particular, we focus on those that first group the data by similarity: after clustering the data, these methods apply to each group a different prediction model, more accurate and built specifically for that pattern. A special case within this family is the one that performs the grouping by means of time series clustering. Time series clustering is an unsupervised data mining task in which, given a set of time series, in other words a time series database, the goal is to divide the series into homogeneous groups [3]; the aim is to maximise the similarity between series in the same group while keeping series from different groups as dissimilar as possible. In traffic data and travel times it is very common to find days with different behaviour (e.g. weekdays and weekends). Thus, given a series made up of the travel times recorded over a whole day, this type of method would first identify the corresponding day type and then obtain the predictions with a model built specifically for that day type. Models of this kind, based on time series clustering, have hardly ever been used in the literature and, as a consequence, their advantages and drawbacks have not been studied thoroughly so far. For this reason, in the second chapter of the thesis we analyse whether identifying the different day types at the start of the modelling procedure helps to obtain better travel time predictions, with positive results. In practice, however, building and using such combined models involves more than one difficulty; in this thesis we focus on two main problems and propose a solution for each of them.

    To begin with, clustering time series requires some non-trivial decisions, for instance choosing a suitable distance function. It has been shown repeatedly in the literature that this decision is very important and strongly conditions the results obtained [7], and we show that this also holds for traffic data. Choosing a distance is far from easy: in recent years the research community has proposed a large number of distances for working with time series and, depending on the characteristics of each database, one or another turns out to be the most appropriate [3, 7]. To the best of our knowledge, there is no formal methodology that helps users make this choice, certainly not in the context of time series clustering. The most common approach is to try a set of distances and choose one according to the results obtained; unfortunately, some of these distances are computationally very expensive, so this strategy is not efficient in practice. With the aim of simplifying this task, in the third chapter of the thesis we propose a multi-label classifier that automatically selects the most suitable distance for clustering a time series database. To build this classifier, we first define a set of features that describe certain aspects of a time series database; among others, we propose methods to measure and quantify the level of noise in the data, the level of autocorrelation, the number of outlying series and the periodicity. These features are the input information required by the classifier, in other words its explanatory variables, and as output the classifier returns the distances that are most suitable for a given database, from a set of candidates. To validate the usefulness of this classifier we carry out a broad set of experiments, both with synthetic databases created specifically for this work and with real data from the UCR archive [4]. The positive results obtained make it clear that the proposed classifier simplifies the choice of a distance function for clustering time series.

    Having presented this contribution, we return to combined models for travel time prediction and identify a second problem, which leads to the second main contribution of the thesis. Recall that these combined models initially group the data using clustering algorithms, each group representing a different pattern or traffic behaviour, and that a separate prediction model is then built for each group using only the historical data of that group. In our case, we apply time series clustering and therefore obtain different day types; to make new predictions, as a new day begins we have to determine which group it belongs to in order to select the model to use. Note that, at prediction time, the data of the whole day are not yet available. For example, if at ten in the morning we want to predict the time needed to go from one point to another at twelve noon (two hours later), we only have the information collected up to ten o'clock that day, together with the historical information. In this situation, we have to decide which group the day belongs to using only the partial information of that day, that is, only the first part of the series. Of course, if the information collected so far is not representative enough, choosing a specific group and model may even be harmful, and it will probably be better to use a more general model built with all the historical data. In short, we want to assign new days to a group as early as possible, while making as few assignment errors as possible. It stands to reason that the earlier the assignment is made, the higher the chance of making a mistake, so the goal is to make the assignments as early as possible while guaranteeing an acceptable level of accuracy. In time series mining this problem is known as early classification of time series [10]. Time series classification [9, 10] is a well-known supervised data mining problem in which, given a set of time series and the class of each of them, the goal is to build a classifier capable of predicting the classes of new series. As a sub-problem of time series classification, early classification arises when a stream of data arriving over time has to be assigned to a specific class as soon as possible [10]. As an example, in medical informatics the patient's clinical data are monitored and recorded over time, and the early detection of some conditions is decisive for the patient's state; for instance, arterial occlusion is most easily detected from photoplethysmography (PPG) series [2], and a delay of even a few tenths of a second in the diagnosis can lead to completely different outcomes.

    Thus, in Chapter 4 of the thesis, as a second important contribution to time series data mining, we present an early time series classifier called ECDIRE (Early Classification framework for time series based on class DIscriminativeness and REliability of predictions). To build this classifier, in the training phase the method analyses each class and computes from which point in time onwards it can be distinguished from the other classes while maintaining a pre-specified level of accuracy; this accuracy level is set by the user according to their interests. The information obtained in this training phase determines when the classifications should be made and thus helps to avoid assigning series too early. In addition, the ECDIRE method uses probabilistic classifiers, and the a posteriori probabilities obtained from this type of classifier provide a further way of controlling the accuracy of the resulting classifications. We apply the ECDIRE method to 45 databases from the UCR archive, improving on the results reported so far in the literature, and, to show how the method would be applied in a real case, we also run experiments on a database created for a problem of detecting and identifying birds from their songs, obtaining satisfactory results.

    Finally, we return once more to travel time prediction and apply the two previous contributions to this problem. From the results obtained we identify some adaptations that the two proposed methods would need for this particular problem. To begin with, besides choosing the distance, its parameters also have to be chosen; we use indices such as the silhouette for this, but it remains open whether this is the best method for the task. We also observe that thorough data cleaning and pre-processing is necessary, since outlying series and noise have a strong influence on the clustering solutions. Lastly, our experiments rely on simple historical prediction models, which make their predictions by averaging the travel times recorded at the same time of day, and using more complex models could be an interesting option. To summarise, this thesis starts from an analysis of the travel time modelling literature and, building on it, makes two contributions to time series data mining: first, the design of a method to automatically select a distance for clustering a set of time series and, second, an early time series classifier based on probabilistic classifiers. Finally, we return to the travel time modelling problem and apply the two previous contributions in this context, opening up new lines of research for the future.
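    To make the combined-model idea concrete, here is a small, hedged sketch of its two stages on synthetic data: cluster daily travel time profiles into day types, then assign a partially observed new day to the closest day type and predict the rest of the day with that type's historical mean profile. Plain Euclidean k-means and a naive partial-profile assignment stand in for the distance-selection classifier and the ECDIRE early classifier actually developed in the thesis.

        # Sketch of the combined model: cluster daily travel time profiles into
        # day types, then predict a new day from the mean profile of the day type
        # it is assigned to. Synthetic data; k-means and the naive early
        # assignment stand in for the methods proposed in the thesis.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        slots = 96                                   # 15-minute slots in one day
        t = np.arange(slots)
        weekday = 10 + 6 * np.exp(-((t - 34) / 6.0) ** 2) + 5 * np.exp(-((t - 70) / 6.0) ** 2)
        weekend = 10 + 1.5 * np.exp(-((t - 52) / 10.0) ** 2)

        # Historical database: 60 days of noisy travel time profiles (minutes).
        days = np.vstack([weekday + rng.normal(0, 0.5, slots) for _ in range(40)] +
                         [weekend + rng.normal(0, 0.5, slots) for _ in range(20)])

        # Stage 1: group the historical days into day types.
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(days)
        mean_profiles = km.cluster_centers_          # one mean profile per day type

        # Stage 2: a new day observed only up to slot 40 is assigned to the closest
        # day type using the partial profile; the rest of the day is then predicted.
        new_day = weekday + rng.normal(0, 0.5, slots)
        observed = 40
        dists = np.linalg.norm(mean_profiles[:, :observed] - new_day[:observed], axis=1)
        day_type = int(np.argmin(dists))
        prediction = mean_profiles[day_type, observed:]
        print("assigned day type:", day_type)
        print("predicted travel time at slot 70: %.1f minutes" % prediction[70 - observed])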

    Secondary Analysis of Electronic Health Records

    Health Informatics; Ethics; Data Mining and Knowledge Discovery; Statistics for Life Sciences, Medicine, Health Science