
    Supply Chain Information Operation Model for Lifelong Learning in the National Credit Bank Format

    The objectives of this research were to develop a supply chain information operation model for lifelong learning in the national credit bank format and to evaluate that model. The sample group was ten experts in supply chain. The research tool was an evaluation form for the supply chain information operation model for lifelong learning in the national credit bank format, which comprises five main components, including Suppliers, Manufacturers, and education customers. The data were analysed using the arithmetic mean and standard deviation. The overall evaluation result for the supply chain information operation model for lifelong learning in the national credit bank format was at a high level, suggesting that the model can support sustainable information system development.
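    A minimal sketch of the mean / standard-deviation analysis the abstract describes is given below. The expert ratings, the 5-point scale, and the interpretation thresholds are hypothetical illustrations; the study's raw evaluation data are not reproduced here.

# Minimal sketch of the mean / standard-deviation analysis described above.
# The expert ratings and the 5-point interpretation bands are hypothetical;
# the paper does not publish its raw evaluation data.
import statistics

# One row per expert, one column per evaluated component (hypothetical 1-5 scores).
ratings = [
    [5, 4, 4, 5, 4],
    [4, 4, 5, 4, 4],
    [5, 5, 4, 4, 5],
]

scores = [score for expert in ratings for score in expert]
mean = statistics.mean(scores)
sd = statistics.stdev(scores)

# Common interpretation bands for a 5-point scale (assumed, not taken from the paper).
level = "high" if mean >= 3.5 else "moderate" if mean >= 2.5 else "low"
print(f"mean = {mean:.2f}, SD = {sd:.2f}, overall evaluation: {level}")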

    Methodology of information operation

    Information operations and influence operations, i.e. perception management (special operations), are actions that manipulate information so that the people they target are encouraged to make decisions that suit the implementers of these operations. These actions are carried out according to a specific methodology. Based on research into examples of information and special operations, it is possible to determine that methodology, i.e. the theoretical framework under which they are conducted. For this purpose, information operations and special operations are described theoretically, along with their basic building blocks such as disinformation and spin. The paper identifies who carries out information and special operations and describes the methodology by which they are conducted.

    Analog readout for optical reservoir computers

    Reservoir computing is a new, powerful and flexible machine learning technique that is easily implemented in hardware. Recently, by using a time-multiplexed architecture, hardware reservoir computers have reached performance comparable to digital implementations. Operating speeds allowing for real-time information operation have been reached using optoelectronic systems. At present the main performance bottleneck is the readout layer, which uses slow, digital postprocessing. We have designed an analog readout suitable for time-multiplexed optoelectronic reservoir computers, capable of working in real time. The readout has been built and tested experimentally on a standard benchmark task. Its performance is better than non-reservoir methods, with ample room for further improvement. The present work thereby overcomes one of the major limitations for the future development of hardware reservoir computers. Comment: to appear in NIPS 201
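    The sketch below illustrates the architecture the abstract refers to: a reservoir whose states are combined by a single linear readout, the layer the paper proposes to implement in analog hardware. The reservoir here is a generic echo-state network simulated in software; the sizes, scaling factors, and toy benchmark signal are illustrative assumptions, not the authors' optoelectronic setup.

# Sketch of a reservoir computer with a linear readout. The readout weights are
# trained by ridge regression; in hardware this layer reduces to an analog
# weighted sum of the node signals. All parameters below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_steps = 50, 2000

# Input: a random signal; target: a delayed nonlinear function of it (toy benchmark).
u = rng.uniform(-0.5, 0.5, n_steps)
y_target = np.roll(u, 1) * u  # stand-in for a standard benchmark task such as NARMA

W_in = rng.uniform(-1, 1, n_nodes)
W = rng.normal(0, 1, (n_nodes, n_nodes))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the reservoir in the echo-state regime

# Drive the reservoir and collect its states.
x = np.zeros(n_nodes)
states = np.zeros((n_steps, n_nodes))
for t in range(n_steps):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Readout: one linear layer trained by ridge regression on the collected states.
washout, ridge = 100, 1e-6
S, Y = states[washout:], y_target[washout:]
w_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_nodes), S.T @ Y)

nmse = np.mean((S @ w_out - Y) ** 2) / np.var(Y)
print(f"normalized training error: {nmse:.4f}")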

    A complexity analysis of statistical learning algorithms

    We apply information-based complexity analysis to support vector machine (SVM) algorithms, with the goal of a comprehensive continuous algorithmic analysis of such algorithms. This involves complexity measures in which some higher-order operations (e.g., certain optimizations) are considered primitive for the purposes of measuring complexity. We consider classes of information operators and algorithms made up of scaled families, and investigate the utility of scaling the complexities to minimize error. We look at the division of statistical learning into information and algorithmic components, at the complexities of each, and at applications to SVM and more general machine learning algorithms. We give applications to SVM algorithms graded into linear and higher-order components, and give an example in biomedical informatics.
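    The sketch below illustrates the information/algorithmic split the abstract describes for an SVM: an information component that evaluates a kernel (Gram) matrix, followed by an algorithmic component that solves the optimization over that matrix, treated as a primitive. scikit-learn, the RBF kernel, and the toy data are assumptions for illustration; the paper's complexity measures are not reproduced here.

# Illustrative split of SVM learning into an information component (kernel
# evaluations) and an algorithmic component (the optimization step).
# Library choice and data are assumptions, not the paper's setup.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)  # toy labels

# Information component: O(n^2 d) kernel evaluations to build the Gram matrix.
K = rbf_kernel(X, X, gamma=0.5)

# Algorithmic component: the quadratic-programming step, treated here as a primitive.
clf = SVC(kernel="precomputed", C=1.0).fit(K, y)
print(f"training accuracy: {clf.score(K, y):.3f}")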