14 research outputs found
Feladatfüggő felépítésű többprocesszoros célrendszerek szintézis algoritmusainak kutatása = Research of synthesis algorithms for special-purpose multiprocessing systems with task-dependent architecture
A new method and a framework tool have been developed for designing a special multiprocessing structure that enables pipelined operation as a form of parallel processing, even if there is no efficiently exploitable parallelism in the task description. The synthesis starts from a task description written in a high-level language (C, Java, etc.). A decomposition algorithm then partitions the program into suitable segments. The desired number of segments, the main properties of the processors implementing the segments, and the estimated communication time demands can be given as input parameters. To construct a favorable pipeline structure, the high-level synthesis (HLS) methodology of pipelined datapaths is applied. These tools attempt to optimize, through scheduling and allocation, the dataflow graph generated from the segments. Thus, the resulting multiprocessor structure is not a uniform processor grid but is shaped to the task to be solved, i.e. it can be called task-dependent. The modularity of the method permits the decomposition algorithm and the HLS tool to be replaced or modified depending on the requirements of the application. To evaluate the method, a specific HLS tool was applied that accepts the desired pipeline restart period as an input parameter and generates an optimized, time-shared, arbitration-free bus system between the processing units. In this structure, no extra software support is needed to organize the communication, provided the processing units can transfer data directly.
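The interplay between decomposition and the pipeline restart period can be illustrated with a minimal sketch: since every stage must finish before the pipeline can accept a new input, the restart period is bounded below by the largest segment time plus communication overhead, so a decomposition algorithm aims for balanced segments. The greedy contiguous partitioning and all names below are illustrative assumptions, not the actual algorithm of the framework.

```python
def decompose(op_costs, k, comm_cost=0.0):
    """Greedily split a sequential list of operation costs into k contiguous
    segments, aiming to keep segment times balanced. Returns the segments and
    the resulting pipeline restart period (largest segment time plus the
    estimated communication overhead)."""
    target = sum(op_costs) / k          # ideal balanced segment time
    segments, current = [], []
    for cost in op_costs:
        current.append(cost)
        # close the segment once it reaches the target, keeping k segments total
        if sum(current) >= target and len(segments) < k - 1:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    restart_period = max(sum(s) for s in segments) + comm_cost
    return segments, restart_period

# Example: 8 operations split into 3 pipeline segments.
segs, period = decompose([3, 1, 4, 1, 5, 9, 2, 6], k=3, comm_cost=0.5)
```

A real HLS flow would instead schedule and allocate the dataflow graph built from the segments, but the same bottleneck-stage reasoning drives the optimization.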
Location-aware online learning for top-k recommendation
We address the problem of recommending highly volatile items to users whose location may be ambiguous and may change over time. The three main ingredients of our method are (1) using online machine learning for the highly volatile items; (2) learning the personalized importance of hierarchical geolocation (for example, town, region, country, continent); and (3) modeling temporal relevance by counting recent items with an exponential decay in recency. For (1), we consider a time-aware setting, where evaluation by traditional measures is cumbersome since we have different top recommendations at different times. We describe a time-aware framework based on individual item discounted gain. For (2), we observe that trends and geolocation turn out to be more important than personalized user preferences: user-item and content-item matrix factorization improves in combination with our geo-trend learning methods, but on their own they are greatly inferior to our location-based models. In fact, since our best performing methods are based on spatiotemporal data, they are applicable in the user cold-start setting as well and perform even better than content-based cold-start methods. Finally, for (3), we estimate the probability that an item will be viewed from its previous views, obtaining a powerful model that combines item popularity and recency. To generate realistic data for measuring our new methods, we rely on Twitter messages with known GPS location and consider hashtags as items that we recommend for inclusion in the user's next message. © 2016 Elsevier B.V.
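Ingredient (3), counting recent views with an exponential decay in recency, can be sketched as a decayed counter: each past view contributes a weight that halves every fixed time interval. The half-life parameterization and function names here are illustrative assumptions; the paper combines such a recency signal with popularity and geolocation models.

```python
import math

def recency_score(view_times, now, half_life):
    """Sum of exponentially decayed counts over previous views: a view that
    happened `half_life` time units before `now` contributes half as much
    as a view happening right now."""
    decay = math.log(2) / half_life
    return sum(math.exp(-decay * (now - t)) for t in view_times)

# A view at `now` counts 1.0; one exactly a half-life earlier counts 0.5.
score = recency_score([100.0, 90.0], now=100.0, half_life=10.0)  # 1.5
```

Ranking hashtags by such a score naturally favors items that are both popular and recent, which matches the volatile-item setting the abstract describes.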