
    Transport dynamics analysis in ferromagnetic heterojunction using Raman spectroscopy and magnetic force microscopy

    Abstract: The ZnO/La0.7Sr0.3MnO3 thin film was epitaxially grown on LaAlO3 (100) by pulsed laser deposition. Raman scattering on the single-layer La0.7Sr0.3MnO3 film and the ZnO/La0.7Sr0.3MnO3 junction revealed a giant softening of the 490 cm−1 Jahn-Teller mode and the 620 and 703 cm−1 optical phonon modes. The Raman spectra of La0.7Sr0.3MnO3 and ZnO/La0.7Sr0.3MnO3 showed distinct features, i.e., a clear thickness dependence of the mode frequencies and intensities. The dynamics results showed that the spin–orbital coupling was caused by an anomalous tilt of the MnO6 octahedra. The LSMO/ZnO junction exhibited excellent positive junction magnetoresistance over the temperature range 77–300 K. The kinetic-energy gain was achieved through orbital competition, a strong crystal field, and charge-order splitting of the energy bands. The transport orbits lay in a ferromagnetic orbital-ordered environment. The barrier structures could be adjusted through the junction interface and the domain boundary conditions in the presence of spin–orbital fluctuations.

    Performance of the fixed-point autoencoder

    The autoencoder is one of the most typical deep learning models and is widely used for unsupervised feature learning in applications such as recognition, identification, and mining. Autoencoder algorithms are compute-intensive tasks. Building a large-scale autoencoder model can satisfy the analysis requirements of huge data volumes, but the training time sometimes becomes unbearable, which naturally leads to investigating hardware acceleration platforms such as FPGAs. Software versions of the autoencoder usually use single-precision or double-precision expressions, but floating-point units are very expensive to implement on an FPGA, so fixed-point arithmetic is often used when implementing the autoencoder in hardware. The resulting accuracy loss, however, is often ignored, and its implications have not been studied in previous work; only a few works have focused on accelerators using fixed bit-widths for other neural network models. Our work gives a comprehensive evaluation of the fixed-point precision implications for the autoencoder, aiming at the best performance and area efficiency. The data format conversion method, the matrix blocking methods, and the approximation of complex functions are the main factors considered, according to the constraints of the hardware implementation. A simulation method for data conversion, matrix blocking with different degrees of parallelism, and a simple PLA approximation method were evaluated in this paper. The results showed that the fixed-point bit-width does affect the performance of the autoencoder, and that multiple factors may have crossed effects: each factor can have a two-sided impact, discarding the "abundant" information and the "useful" information at the same time.
    The representation domain must be carefully selected according to the computation parallelism. The results also showed that using fixed-point arithmetic can guarantee the precision of the autoencoder algorithm and achieve acceptable convergence speed.
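The data format conversion the abstract evaluates can be illustrated with a minimal sketch (the helper names and the Q-format bit splits are hypothetical, not the paper's actual FPGA data path): a real value is scaled by the fractional bit-width, rounded to a signed integer, and saturated to the representable range.

```python
def to_fixed(x, frac_bits, total_bits=16):
    """Quantize a real value to a signed fixed-point integer (illustrative sketch)."""
    scale = 1 << frac_bits
    q = int(round(x * scale))
    # Saturate to the representable signed range for the given total bit-width.
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return max(lo, min(hi, q))

def to_float(q, frac_bits):
    """Recover the real value represented by a fixed-point integer."""
    return q / (1 << frac_bits)

# More fractional bits shrink the rounding error for the same total width.
x = 0.718281828
err_q8_8  = abs(x - to_float(to_fixed(x, 8), 8))    # Q8.8 split
err_q4_12 = abs(x - to_float(to_fixed(x, 12), 12))  # Q4.12 split
```

This also shows the two-sided impact the abstract mentions: giving more bits to the fraction reduces rounding error, but it leaves fewer integer bits, so large intermediate values saturate instead.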
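The PLA approximation of complex functions mentioned in the abstract can be sketched as piecewise-linear interpolation between a small table of breakpoints, which is the usual reason such units are cheap in hardware; the knot positions below are an illustrative choice, not the paper's table.

```python
import math

def pla_sigmoid(x, knots=(-6.0, -3.0, -1.0, 0.0, 1.0, 3.0, 6.0)):
    """Piecewise-linear approximation of the logistic sigmoid (illustrative sketch).

    Interpolates linearly between exact sigmoid values at a handful of
    breakpoints and clamps outside the outermost knots.
    """
    if x <= knots[0]:
        return 1.0 / (1.0 + math.exp(-knots[0]))
    if x >= knots[-1]:
        return 1.0 / (1.0 + math.exp(-knots[-1]))
    for a, b in zip(knots, knots[1:]):
        if x <= b:
            fa = 1.0 / (1.0 + math.exp(-a))
            fb = 1.0 / (1.0 + math.exp(-b))
            # Linear interpolation on the segment [a, b].
            return fa + (fb - fa) * (x - a) / (b - a)
```

A hardware version would store the precomputed breakpoint values and slopes in small lookup tables rather than evaluating `exp` at run time.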