Methods for managing work-stealing deques in dynamic schedulers for multiprocessor parallel computing
In parallel task schedulers that use the work-stealing strategy, each processor has its own task deque. One end of the deque is used for inserting and removing tasks only by the owner, while the other end is used by other processors to steal tasks. The article surveys the deque management methods used in implementations of work-stealing schedulers and describes the optimal deque management problems that our team has formulated and solved for the work-stealing strategy. The idea behind the algorithms for managing a deque in two-level memory is that when the memory allocated to the deques overflows, its elements (tasks) are redistributed between the memory levels. Elements from the ends of the deque are kept in fast memory, since they will be accessed in the near future, while elements from the middle part of the deque are stored in slow memory. In that case, it is necessary to determine the optimal number of elements to keep in fast memory, depending on the optimality criterion and the system parameters
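The redistribution scheme described above can be illustrated with a toy sketch (not the authors' implementation; `fast_capacity` and `k`, the number of elements kept at each end after a spill, are hypothetical parameters standing in for the optimality criterion the article derives):

```python
from collections import deque

class TwoLevelDeque:
    """Toy work-stealing deque whose middle section spills to slow memory.

    Global order: top[0] (steal end) ... top[-1], slow, bottom[0] ... bottom[-1]
    (push/pop end). `bottom` and `top` model fast memory; `slow` models slow memory.
    """

    def __init__(self, fast_capacity=8, k=2):
        self.fast_capacity = fast_capacity  # fast-memory budget (illustrative)
        self.k = k              # elements kept at each end on a spill (illustrative)
        self.bottom = deque()   # owner's end, fast memory
        self.top = deque()      # thieves' end, fast memory
        self.slow = deque()     # middle of the deque, slow memory

    def _fast_size(self):
        return len(self.bottom) + len(self.top)

    def push(self, task):
        """Owner pushes at the bottom; spill the middle on overflow."""
        self.bottom.append(task)
        if self._fast_size() > self.fast_capacity:
            self._spill()

    def _spill(self):
        # Move everything except k elements at each end into slow memory,
        # preserving the deque order.
        while len(self.bottom) > self.k:
            self.slow.append(self.bottom.popleft())
        while len(self.top) > self.k:
            self.slow.appendleft(self.top.pop())

    def pop(self):
        """Owner pops from the bottom, refilling from slow memory if needed."""
        if not self.bottom:
            self._refill_bottom()
        return self.bottom.pop() if self.bottom else None

    def steal(self):
        """Another processor steals from the top end."""
        if not self.top:
            self._refill_top()
        return self.top.popleft() if self.top else None

    def _refill_bottom(self):
        while self.slow and len(self.bottom) < self.k:
            self.bottom.appendleft(self.slow.pop())
        if not self.bottom and self.top:   # deque nearly empty: share the other end
            self.bottom.append(self.top.pop())

    def _refill_top(self):
        while self.slow and len(self.top) < self.k:
            self.top.append(self.slow.popleft())
        if not self.top and self.bottom:
            self.top.append(self.bottom.popleft())
```

With `fast_capacity=4` and `k=1`, pushing six tasks spills the four middle tasks to slow memory; subsequent `pop` and `steal` calls pull them back one end at a time, preserving LIFO order for the owner and FIFO order for thieves.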
Secure-GLOR: An adaptive secure routing protocol for dynamic wireless mesh networks
© 2017 IEEE. With the dawn of a new era, digital security has become one of the most essential parts of any network. Be it a physical network, a virtual network or a social network, the demand for secure data transmission is ever increasing. Wireless mesh networks face the same security requirements as legacy networks. This paper presents a secure version of the Geo-Location Oriented Routing (GLOR) protocol for wireless mesh networks, incorporating a multilevel security framework. It implements authentication using the new features of the network model and enables encryption throughout the network to provide a high level of security
Evaluation of the role of smart city technologies to combat COVID-19 pandemic
This is the accepted manuscript of a conference paper delivered at Recovering from COVID: Responsible Management and Reshaping the Economy, 35th British Academy of Management Conference, 31st August - 3rd September, Lancaster University Management School, United Kingdom. Shetty, N., Renukappa, S., Suresh, S. & Algahtan, K. (2021) Evaluation of the role of smart city technologies to combat COVID-19 pandemic, presented at Recovering from COVID: Responsible Management and Reshaping the Economy, 35th British Academy of Management Conference, 31st August - 3rd September, Lancaster University Management School, United Kingdom
Survey of storage systems for high-performance computing
In current supercomputers, storage is typically provided by parallel distributed file systems for hot data and tape archives for cold data. These file systems are often compatible with local file systems due to their use of the POSIX interface and semantics, which eases development and debugging because applications can easily run both on workstations and supercomputers. There is a wide variety of file systems to choose from, each tuned for different use cases and implementing different optimizations. However, overall application performance is often held back by I/O bottlenecks due to insufficient performance of file systems or I/O libraries for highly parallel workloads. Performance problems are dealt with using novel storage hardware technologies as well as alternative I/O semantics and interfaces. These approaches have to be integrated into the storage stack seamlessly to make them convenient to use. Upcoming storage systems abandon the traditional POSIX interface and semantics in favor of alternative concepts such as object and key-value storage; moreover, they heavily rely on technologies such as NVM and burst buffers to improve performance. Additional tiers of storage hardware will increase the importance of hierarchical storage management. Many of these changes will be disruptive and require application developers to rethink their approaches to data management and I/O. A thorough understanding of today's storage infrastructures, including their strengths and weaknesses, is crucially important for designing and implementing scalable storage systems suitable for the demands of exascale computing
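The interface shift the survey describes can be illustrated by contrasting POSIX-style byte-stream I/O with a key-value style store (a minimal sketch; `KVStore` is an illustrative stand-in, not any real storage system's API):

```python
import os

# POSIX-style I/O: byte streams addressed by path and offset,
# manipulated through open/read/write/close system calls.
def posix_roundtrip(path, payload):
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
    os.write(fd, payload)          # write raw bytes at the current offset
    os.close(fd)
    fd = os.open(path, os.O_RDONLY)
    data = os.read(fd, len(payload))
    os.close(fd)
    return data

# Key-value style: whole objects addressed by key, with no offsets,
# directories, or POSIX consistency semantics -- mimicked here by a dict.
class KVStore:
    def __init__(self):
        self._objects = {}

    def put(self, key, value):
        self._objects[key] = value

    def get(self, key):
        return self._objects[key]
```

The key-value interface trades POSIX's fine-grained, strongly consistent byte access for a simpler contract that is easier to scale across many storage servers, which is one reason the surveyed upcoming systems favor it.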
Blockchain in the built environment: analysing current applications and developing an emergent framework
Distributed ledger technology (DLT), commonly referred to as ‘blockchain’ and originally invented to create a peer-to-peer digital currency, is rapidly attracting interest in other sectors. The aim in this paper is (1) to investigate the applications of DLT within the built environment, and the challenges and opportunities facing its adoption; and (2) to develop a multi-dimensional emergent framework for DLT adoption within the construction sector.
Key areas of DLT applications were found in: smart energy; smart cities and the sharing economy; smart government; smart homes; intelligent transport; Building Information Modelling (BIM) and construction management; and business models and organisational structures. The results showed a significant concentration of DLT research on the operation phase of assets. This is expected given the significant resources and lifespan associated with the operation phase of assets and their social, environmental and economic impact. However, more attention is required to address the current gap at the design and construction phases to ensure that these phases are not treated in isolation from the operational phase.
An emergent framework combining the political, social and technical dimensions was developed. The framework was overlaid with an extensive set of challenges and opportunities. The structured and inter-connected dimensions provided by the framework can be used by field researchers as a point of departure to investigate a range of research questions from political, social or technical perspectives
A Systematic Mapping Study of Cloud Resources Management and Scalability in Brokering, Scheduling, Capacity Planning and Elasticity
Cloud computing allows for resource management through various means, including brokering, scheduling, elasticity and capacity planning, and these processes help facilitate service utilization. Determining a particular research area, especially in terms of resource management and scalability in the cloud, is usually a cumbersome process for a researcher, hence the need for reviews and paper surveys to identify potential research gaps. The objective of this work was to carry out a systematic mapping study of resource management and scalability in the cloud. A systematic mapping study offers a summarized overview of studies that have been carried out in a particular area of interest and presents the results of such overviews graphically using a map. Although the systematic mapping process requires less effort, its results are more coarse-grained. In this study, publications were analysed based on their topics, research type and contribution facets. These publications were research works which focused on resource management, scheduling, capacity planning, scalability and elasticity. This study classified publications into research facets (evaluation, validation, solution, philosophical, opinion and experience) and contribution facets based on the metrics, tools, processes, models and methods used. The results showed that 31.3% of the considered publications focused on evaluation-based research, 19.85% on validation and 32% on processes. About 2.4% focused on metrics for capacity planning and 5.6% on tools relating to resource management, while 5.6% and 8% of the publications concerned models for capacity planning and scheduling methods, respectively. Research works focusing on validating capacity planning and elasticity were the fewest, at 2.29% and 0.76%, respectively. This study clearly identified gaps in the field of resource management and scalability in the cloud, which should stimulate interest in further studies by both researchers and industry practitioners
A Review of Blockchain-Based Systems in Transportation
This paper presents a literature review of the application of blockchain-based systems in transportation. The main aim was to identify, through a multi-step methodology, current research trends, the main gaps in the literature, and possible future challenges. First, a bibliometric analysis was carried out to obtain a broad overview of the topic of interest. Subsequently, the most influential contributions were analysed in depth with reference to two areas: supply chain and logistics; and road traffic management and smart cities. The most important result is that blockchain technology is still at an early stage but appears extremely promising, given its possible applications in multiple fields, such as food track and trace, regulatory compliance, smart vehicle security, and supply-demand matching. Much effort is still needed to reach maturity, because several models have been theorized in recent years but very few have been implemented in real contexts. Moreover, the link between blockchain and sustainability was explored, showing that this technology could be the trigger for limiting food waste, reducing exhaust gas emissions, favouring correct urban development and, in general, improving quality of life
FIN-DM: a data mining process model for financial services
Data mining is a set of rules, processes, and algorithms that allow companies to increase revenues, reduce costs, optimize products and customer relationships, and achieve other business goals by extracting actionable insights from the data they collect on a day-to-day basis. Data mining and analytics projects require well-defined methodology and processes. Several standard process models for conducting data mining and analytics projects are available. Among them, the most notable and widely adopted standard model is CRISP-DM. It is industry-agnostic and is often adapted to meet sector-specific requirements. Industry-specific adaptations of CRISP-DM have been proposed across several domains, including healthcare, education, industrial and software engineering, and logistics. However, until now, there has been no adaptation of CRISP-DM for the financial services industry, which has its own set of domain-specific requirements.
This PhD thesis addresses this gap by designing, developing, and evaluating a sector-specific data mining process for financial services (FIN-DM). The thesis investigates how standard data mining processes are used across various industry sectors and in financial services. The examination identified a number of adaptation scenarios of traditional frameworks. It also suggested that these approaches do not pay sufficient attention to turning data mining models into software products integrated into the organizations' IT architectures and business processes. In the financial services domain, the main discovered adaptation scenarios concerned technology-centric aspects (scalability), business-centric aspects (actionability), and human-centric aspects (mitigating discriminatory effects) of data mining. Next, a case study in an actual financial services organization revealed 18 perceived gaps in the CRISP-DM process.
Using the data and results from these studies, the PhD thesis outlines an adaptation of CRISP-DM for the financial sector, named the Financial Industry Process for Data Mining (FIN-DM). FIN-DM extends CRISP-DM to support privacy-compliant data mining, to tackle AI ethics risks, to fulfill risk management requirements, and to embed quality assurance as part of the data mining life-cycle.
https://www.ester.ee/record=b547227
Reliable Collaborative Filtering on Spatio-Temporal Privacy Data
Lots of multilayer information, such as spatio-temporal privacy check-in data, accumulates in location-based social networks (LBSNs). When using collaborative filtering for LBSN location recommendation, one of the core issues is how to improve recommendation performance by combining the traditional algorithm with this multilayer information. Existing collaborative filtering approaches use only the sparse user-item rating matrix, which entails high computational complexity and inaccurate results. This paper proposes a novel collaborative filtering-based location recommendation algorithm called LGP-CF, which takes spatio-temporal privacy information into account. By mining users' check-in behavior patterns, the dataset is segmented semantically to reduce the amount of data that needs to be computed. A clustering algorithm is then used to obtain and narrow the set of similar users. A user-location bipartite graph is modeled using the filtered set of similar users, over which LGP-CF can quickly locate users' locations and trajectories through message propagation and aggregation. By calculating user similarity from the spatio-temporal privacy data on the graph, the ratings of recommendable locations can finally be computed. Experimental results on physical clusters indicate that the proposed LGP-CF algorithm makes recommendations more accurately than existing algorithms
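The recommendation idea can be sketched in miniature (this is not the paper's implementation: semantic segmentation and clustering are collapsed into a simple Jaccard-similarity threshold, and one round of similarity-weighted voting stands in for message propagation over the bipartite graph; all names and parameters are illustrative):

```python
from collections import defaultdict

def jaccard(a, b):
    """Similarity between two users' sets of checked-in locations."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(checkins, target, sim_threshold=0.2):
    """Score unvisited locations for `target` by similarity-weighted votes.

    `checkins` maps each user to the set of locations they have visited.
    Users below `sim_threshold` are pruned (a stand-in for the clustering
    step); each remaining user propagates a vote, weighted by similarity,
    to every location they visited that `target` has not.
    """
    visited = checkins[target]
    scores = defaultdict(float)
    for user, locs in checkins.items():
        if user == target:
            continue
        sim = jaccard(visited, locs)
        if sim < sim_threshold:
            continue  # prune dissimilar users before touching the graph
        for loc in locs - visited:
            scores[loc] += sim  # propagate the neighbor's vote to the location
    # Highest-scoring unvisited locations first.
    return sorted(scores, key=scores.get, reverse=True)
```

For example, a user who shares two of three check-ins with a neighbor inherits that neighbor's remaining location as the top recommendation, while users with disjoint check-in sets contribute nothing.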