34 research outputs found

    DP and mandatory determiners in article-less Serbo-Croatian

    Although it lacks grammaticalized categories of definite and indefinite articles, and its “bare” nouns are usually ambiguous between a definite and an indefinite interpretation, Serbo-Croatian has appropriate lexical items for marking discourse-old and discourse-new nominal referents. I demonstrate that there are contexts in which the use of these discourse markers is obligatory for obtaining the intended reading, as the “bare” nominal phrase would otherwise unambiguously be interpreted as definite or indefinite, depending on the context. More importantly, when present, these discourse markers block left-branch and adjunct extractions from the rest of the NP, indicating that a determiner phrase might be projected even in an article-less language.

    On-line blind separation of non-stationary signals

    This paper addresses the problem of blind separation of non-stationary signals. We introduce an on-line separating algorithm for the estimation of independent source signals under the assumption of non-stationarity of the sources. As the separating model, we apply a self-organizing neural network with lateral connections and define a contrast function based on the correlation of the network outputs. A separating algorithm for the adaptation of the network weights is derived using the state-space model of the network dynamics and the extended Kalman filter. Simulation results obtained in blind separation of artificial and real-world signals from their artificial mixtures show that the separating algorithm based on the extended Kalman filter outperforms a stochastic-gradient-based algorithm in both convergence speed and estimation accuracy.
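The contrast used above — driving the correlation of the network outputs to zero — can be illustrated with a much simpler on-line rule than the paper's. The sketch below is NOT the extended-Kalman-filter algorithm; it uses an invented 2×2 mixing matrix, invented artificial sources, and a plain anti-Hebbian whitening update, purely to show how a demixing matrix can be adapted sample by sample until the outputs decorrelate.

```python
# Illustrative sketch only: invented signals, invented mixing matrix, and a
# plain on-line decorrelation rule (not the paper's EKF-based algorithm).
import math
import random

random.seed(0)

A = [[1.0, 0.6],
     [0.5, 1.0]]        # mixing matrix (unknown to the separating algorithm)
W = [[1.0, 0.0],
     [0.0, 1.0]]        # demixing matrix, initialised to identity
lr = 0.01               # learning rate

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def sources(t):
    # two independent artificial sources: a deterministic tone and noise
    return [math.sin(0.05 * t), random.uniform(-1.0, 1.0)]

def cross_correlation(Wm, n=2000):
    # average cross-product of the two outputs y = Wm @ A @ s
    acc = 0.0
    for t in range(n):
        y = matvec(Wm, matvec(A, sources(t)))
        acc += y[0] * y[1]
    return acc / n

mixture_cross = cross_correlation([[1.0, 0.0], [0.0, 1.0]])  # raw mixtures

for t in range(20000):
    x = matvec(A, sources(t))
    y = matvec(W, x)
    # anti-Hebbian whitening update: W <- W + lr * (I - y y^T) W
    yyW = [[y[i] * (y[0] * W[0][j] + y[1] * W[1][j]) for j in range(2)]
           for i in range(2)]
    W = [[W[i][j] + lr * (W[i][j] - yyW[i][j]) for j in range(2)]
         for i in range(2)]

residual_cross = cross_correlation(W)  # far smaller than mixture_cross
```

Decorrelation alone leaves a rotational ambiguity; it is the non-stationarity assumption stated in the abstract — source statistics that change over time — that lets an on-line algorithm resolve this ambiguity and recover the individual sources.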

    COMPARATIVE ANALYSIS OF ATTERBERG’S LIMITS OF FINE-GRAINED SOIL DETERMINED BY VARIOUS METHODS

    Determination of the Atterberg’s limits is necessary for the classification of fine-grained soils. These limits can be determined according to the valid standard SRPS EN ISO 17892-12. The standard prescribes two methods for determining the liquid limit, the Casagrande cup and the fall cone test, and one method for determining the plastic limit, the thread-rolling method. In this paper the fall cone method was also used as an alternative method for determining the plastic limit. Ten samples of various fine-grained materials, originating from the wider area of the city of Niš, were tested. All samples were classified based on the results obtained both by the methods prescribed by the standard and by the alternative methods. The comparative analysis shows that the results obtained by the standard and alternative methods are close, but also that the scatter of the results obtained by the fall cone method is significantly smaller, and the reproducibility is therefore higher.
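In the Casagrande procedure mentioned above, the liquid limit is conventionally read off a flow curve: water content is plotted against the logarithm of the blow count, a straight line is fitted, and the liquid limit is the water content at 25 blows. The sketch below shows that computation with invented data points, not values from the paper.

```python
# Flow-curve reading of the liquid limit (Casagrande cup convention):
# fit w vs log10(N) and evaluate the fit at N = 25 blows.
# The data points are hypothetical, for illustration only.
import math

blows = [16, 21, 27, 33]                  # hypothetical blow counts N
water_content = [44.1, 42.5, 41.0, 39.8]  # hypothetical water contents w, %

xs = [math.log10(n) for n in blows]
k = len(xs)
mean_x = sum(xs) / k
mean_w = sum(water_content) / k

# least-squares straight line w = intercept + slope * log10(N)
slope = (sum((x - mean_x) * (w - mean_w) for x, w in zip(xs, water_content))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_w - slope * mean_x

liquid_limit = intercept + slope * math.log10(25)  # w at N = 25 blows
```

Water content falls as the blow count rises, so the fitted slope is negative and the liquid limit lands between the measured water contents bracketing 25 blows.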

    Implementation of deep models with arbitrary differentiable transformations

    Deep learning models are developing day by day; just as people learn day by day, so computers simulate the human brain. Recurrent neural networks learn from sequential data in which the order is important and make predictions based on it. The basic model is the plain recurrent neural network, RNN, whose cells pass information about the position and values of all the data seen so far on to future cells. This is where the problem of exploding and vanishing gradients arises, so the plain RNN is not often used in practice. The long short-term memory cell, LSTM, is an upgrade of the RNN model with a long-term memory, in which each cell filters the information and keeps what is essential. When implementing such cells oneself, hand-made cells can be used whose properties differ sufficiently from the usual well-known cells. Cells can easily be written in Python using PyTorch, but to optimize parts of the code one can use hand-written C++ extensions that connect to Python easily and can be used like native methods of Python libraries. However, Python libraries are extremely well optimized, so writing such extensions only makes sense when a large amount of work is involved and when high-quality, optimized code can be written.
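What a recurrent cell does — combining the current input with the previous hidden state so that information about earlier positions is carried forward — can be sketched in a few lines. The weights below are invented for illustration; this is plain Python, not a PyTorch module.

```python
# Minimal sketch of a recurrent cell with invented scalar weights:
# h_t = tanh(w_x * x_t + w_h * h_{t-1} + b)
import math

def rnn_cell(x_t, h_prev, w_x=0.5, w_h=0.8, b=0.0):
    # one step: mix the current input with the carried-over hidden state
    return math.tanh(w_x * x_t + w_h * h_prev + b)

def run_rnn(sequence, h0=0.0):
    # unroll the cell over a sequence, collecting the hidden state at each step
    h = h0
    states = []
    for x_t in sequence:
        h = rnn_cell(x_t, h)
        states.append(h)
    return states

# a single impulse at the start: its influence decays at each later step,
# which is the vanishing-gradient problem mentioned in the abstract
states = run_rnn([1.0, 0.0, 0.0, 0.0])
```

In a real PyTorch implementation such a cell would typically be a subclass of `torch.nn.Module` with learnable weight tensors, and the same per-step loop is what a hand-written C++ extension would implement when the Python-level loop becomes the bottleneck.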

    Public-private partnership as a driver of local development

    The aim of this paper was to present public-private partnership as a driver of local development. The paper reviews the state of the system of local and regional (territorial) self-government in the Republic of Croatia, which consists of 428 municipalities, 127 towns and 20 counties. The goal of local development is to improve the quality of life in the local area, develop the capacity of the local community, develop entrepreneurship and overcome market failures. An important factor in ensuring the sustainable development of a country is the creation of conditions for the balanced economic development of all its regions. The guarantee for this is the development of local public infrastructure and the provision of related services. The level of development of local infrastructure therefore becomes a decisive factor in local development. Public-private partnership can be an important factor in initiating local development because it can ensure the continuity of local public investment.

    Syntax and semantics of the definite and indefinite adjective aspect in Serbian

    The subject of the dissertation is the syntax and semantics of adnominal adjective aspect in the Serbian language. Cartographic generative-syntactic and formal-semantic methodologies have been applied in modeling the empirical facts, collected by excerpting 19th- and 20th-century literary texts, the daily press and the Internet, and by means of grammaticality judgments based on the linguistic intuition of native speakers. Although in the linguistic literature the category is often claimed to be almost neutralized, we determined that adjective aspect in Serbian can mark (in)definiteness (discourse-givenness), (non-)uniqueness, epistemic (non-)specificity, stage-/individual-level interpretation of the adjectival modification, quantificational and generic (non-)restrictiveness, as well as subsectivity/intersectivity. The definite adjective aspect is ambiguous with regard to all of these categories, while the indefinite adjective aspect marks indefiniteness, non-uniqueness, epistemic non-specificity, stage-level interpretation, quantificational and generic restrictiveness, and subsectivity. Although morphologically marked, the definite adjective aspect is semantically unmarked, while the morphologically unmarked indefinite aspect behaves as the semantically marked form. An appropriate syntactic model has been postulated, containing two split-DP domains: a kind DP, located below the partitive projection, and an individual DP, above the cardinality projection. In the proposed analysis, the definite adjective aspect results from a specific type of agreement between the adjective and the referential features of definiteness or specificity in either of the two DP domains, which explains the ambiguity of the definite aspect. By covert movement of the adjective at LF and binding to an operator of a higher functional projection, such as the projection of the speaker’s subjective evaluation, a quantifier projection or a verbal-aspect projection, the adjective can exit the DP domain, escape agreement and remain in the indefinite aspect. This is how we interpret the indefinite adjective aspect in definite (discourse-old), unique and epistemically specific contexts, in the presence of a quantifier, and in stage-level interpretations of the modification.