
    Structured parallel programming using trees

    High-level abstractions for parallel programming are still immature. Computations on complicated data structures such as pointer structures are regarded as irregular algorithms. General graph structures, which irregular algorithms typically deal with, are difficult to divide and conquer. Because the divide-and-conquer paradigm is essential for load balancing in parallel algorithms and a key to parallel programming, general graphs remain genuinely hard to handle. Trees, by contrast, lead to divide-and-conquer computations by definition and are sufficiently general and powerful as a programming tool. We therefore deal with abstractions of tree-based computations. Our study started from Matsuzaki’s work on tree skeletons. We have improved the usability of tree skeletons by enriching their implementation. Specifically, we have dealt with two issues. First, we implemented loose coupling between skeletons and data structures and developed a flexible tree skeleton library. Second, we implemented a parallelizer that transforms sequential recursive functions in C into parallel programs that use tree skeletons implicitly. This parallelizer hides the complicated API of tree skeletons and enables programmers to use tree skeletons without extra burden. The practicality of tree skeletons, however, has not improved. On the basis of observations from the practice of tree skeletons, we deal with two application domains: program analysis and neighborhood computation. In the domain of program analysis, compilers treat input programs as control-flow graphs (CFGs) and perform analysis on CFGs. Since CFGs are general graphs, program analysis is difficult to divide and conquer. To resolve this problem, we have developed divide-and-conquer methods for program analysis in a syntax-directed manner on the basis of Rosen’s high-level approach. Specifically, we have dealt with data-flow analysis based on Tarjan’s formalization and value-graph construction based on a functional formalization. In the domain of neighborhood computations, a primary issue is locality. A naive parallel neighborhood computation without locality enhancement causes many cache misses. The divide-and-conquer paradigm is known to be useful also for locality enhancement. We therefore have applied algebraic formalizations and a tree-segmenting technique derived from tree skeletons to the locality enhancement of neighborhood computations. (The University of Electro-Communications)
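    To make the divide-and-conquer flavour of tree skeletons concrete, the sketch below shows a minimal reduce-style skeleton over binary trees. It is only a Python illustration under assumed Leaf/Node types and a sum operator, not the authors' C library or its API; the root's two subtrees are reduced in separate processes and the partial results are combined at the root.

        # Minimal sketch of a reduce-style tree skeleton (illustrative, not the
        # authors' library): divide at the root, reduce each subtree independently,
        # then combine the partial results.
        from concurrent.futures import ProcessPoolExecutor
        from dataclasses import dataclass

        @dataclass
        class Leaf:
            value: int

        @dataclass
        class Node:
            left: object    # Leaf or Node
            value: int
            right: object   # Leaf or Node

        def leaf_op(v):
            return v

        def node_op(left, v, right):
            return left + v + right

        def tree_reduce(t):
            # Sequential divide-and-conquer: reduce subtrees, then combine at the node.
            if isinstance(t, Leaf):
                return leaf_op(t.value)
            return node_op(tree_reduce(t.left), t.value, tree_reduce(t.right))

        def parallel_tree_reduce(t):
            # Reduce the two subtrees of the root in separate processes, then combine.
            if isinstance(t, Leaf):
                return leaf_op(t.value)
            with ProcessPoolExecutor(max_workers=2) as ex:
                left = ex.submit(tree_reduce, t.left)
                right = ex.submit(tree_reduce, t.right)
                return node_op(left.result(), t.value, right.result())

        if __name__ == "__main__":
            t = Node(Node(Leaf(1), 2, Leaf(3)), 4, Leaf(5))
            print(parallel_tree_reduce(t))  # 15 (sum of all node and leaf values)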

    Beyond Reuse Distance Analysis: Dynamic Analysis for Characterization of Data Locality Potential

    Emerging computer architectures will feature drastically decreased flops/byte (ratio of peak processing rate to memory bandwidth) as highlighted by recent studies on Exascale architectural trends. Further, flops are getting cheaper while the energy cost of data movement is increasingly dominant. The understanding and characterization of data locality properties of computations is critical in order to guide efforts to enhance data locality. Reuse distance analysis of memory address traces is a valuable tool to perform data locality characterization of programs. A single reuse distance analysis can be used to estimate the number of cache misses in a fully associative LRU cache of any size, thereby providing estimates on the minimum bandwidth requirements at different levels of the memory hierarchy to avoid being bandwidth bound. However, such an analysis only holds for the particular execution order that produced the trace. It cannot estimate potential improvement in data locality through dependence-preserving transformations that change the execution schedule of the operations in the computation. In this article, we develop a novel dynamic analysis approach to characterize the inherent locality properties of a computation and thereby assess the potential for data locality enhancement via dependence-preserving transformations. The execution trace of a code is analyzed to extract a computational directed acyclic graph (CDAG) of the data dependences. The CDAG is then partitioned into convex subsets, and the convex partitioning is used to reorder the operations in the execution trace to enhance data locality. The approach enables us to go beyond reuse distance analysis of a single specific execution order in characterizing a computation's data locality properties. It can serve a valuable role in identifying promising code regions for manual transformation, as well as assessing the effectiveness of compiler transformations for data locality enhancement. We demonstrate the effectiveness of the approach using a number of benchmarks, including case studies where the potential shown by the analysis is exploited to achieve lower data movement costs and better performance. Comment: Transactions on Architecture and Code Optimization (2014).
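    As background for the single-trace analysis the article sets out to go beyond, the sketch below computes reuse distances from an address trace and estimates misses for a fully associative LRU cache. It is a toy Python illustration (quadratic in trace length), not the tool developed in the article, and the example trace is made up.

        # Minimal sketch of reuse distance analysis (illustrative): for each access,
        # the reuse distance is the number of distinct addresses touched since the
        # previous access to the same address; accesses with distance >= C miss in a
        # fully associative LRU cache with C lines.
        from collections import Counter

        def reuse_distances(trace):
            """Return a histogram {distance: count}; -1 marks a first (cold) access."""
            stack = []          # addresses from most to least recently used
            hist = Counter()
            for addr in trace:
                if addr in stack:
                    d = stack.index(addr)   # distinct addresses since last use
                    stack.remove(addr)
                else:
                    d = -1                  # cold miss
                stack.insert(0, addr)
                hist[d] += 1
            return hist

        def lru_misses(hist, cache_lines):
            """Estimate misses for a fully associative LRU cache of the given size."""
            return sum(c for d, c in hist.items() if d == -1 or d >= cache_lines)

        if __name__ == "__main__":
            trace = ["a", "b", "c", "a", "b", "d", "a"]
            h = reuse_distances(trace)
            print(h)                  # Counter({-1: 4, 2: 3})
            print(lru_misses(h, 2))   # misses in a 2-line LRU cache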

    A secure data outsourcing scheme based on Asmuth–Bloom secret sharing

    Data outsourcing is an emerging paradigm for data management in which a database is provided as a service by third-party service providers. One of the major benefits of offering database as a service is to provide organisations that are unable to purchase expensive hardware and software to host their databases with efficient data storage accessible online at a low cost. Nevertheless, several issues of data confidentiality, integrity, availability and efficient indexing of users’ queries at the server side have to be addressed in the data outsourcing paradigm. Service providers have to guarantee that their clients’ data are secured against internal (insider) and external attacks. This paper briefly analyses the existing indexing schemes in data outsourcing and highlights their advantages and disadvantages. It then proposes a secure data outsourcing scheme based on Asmuth–Bloom secret sharing which tries to address issues in data outsourcing such as data confidentiality, availability and order preservation for efficient indexing.
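    For readers unfamiliar with the underlying primitive, the sketch below shows plain (k, n) Asmuth–Bloom secret sharing with toy moduli: a secret smaller than m0 is embedded into a larger integer y, shares are residues of y modulo pairwise coprime moduli, and any k shares recover y (and hence the secret) via the Chinese Remainder Theorem. This is an illustration of the primitive only, not the outsourcing or indexing scheme proposed in the paper; all parameters below are illustrative.

        # Minimal sketch of (k, n) Asmuth-Bloom secret sharing with toy parameters.
        import random
        from math import prod

        def make_shares(secret, m0, moduli, k):
            """Split `secret` (< m0) into shares; any k of them recover it."""
            ms = sorted(moduli)
            assert secret < m0
            # Asmuth-Bloom condition: product of the k smallest moduli must exceed
            # m0 times the product of the k-1 largest moduli.
            assert prod(ms[:k]) > m0 * prod(ms[len(ms) - k + 1:])
            bound = prod(ms[:k])
            y = secret + m0 * random.randrange((bound - secret) // m0)
            return [(m, y % m) for m in moduli]

        def crt(residues):
            """Chinese Remainder Theorem for pairwise coprime moduli (Python 3.8+)."""
            x, m = 0, 1
            for mi, ri in residues:
                g = pow(m, -1, mi)              # modular inverse of m mod mi
                x = x + m * ((ri - x) * g % mi)
                m *= mi
            return x % m

        def recover(shares, m0):
            """Recover the secret from any k shares via CRT, then reduce mod m0."""
            return crt(shares) % m0

        if __name__ == "__main__":
            m0, moduli, k = 3, [11, 13, 17], 2
            shares = make_shares(2, m0, moduli, k)
            print(recover(shares[:2], m0))   # 2, from the first two shares
            print(recover(shares[1:], m0))   # 2, from the last two shares

    In an outsourcing setting each share could be held by a different server, so that fewer than k servers together cannot reconstruct the secret.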

    The future of branch cash holdings management is here: New Markov chains

    Liquidity management is one of the main concerns of the banking sector, since it provides control in key areas such as treasury management, working capital financing and business valuation. Under the assumption that branch efficiency makes a fundamental contribution to the effective performance of the banking institution as a whole, this paper provides a new methodology (Markov chains by blocks) to gain knowledge about branch cash holdings: conditions that ensure optimal cash holdings, recurrence properties that help to better predict shifts in cash holdings, and the study of the steady states of branch cash holdings using ergodic theory. These findings let bank managers know how long the current cash holdings remain valid. This is a crucial advantage for efficient cash management: while helping to keep banking institutions on a sound financial footing by guaranteeing the legally required safety cushion, it also allows bank managers to make sound decisions on fund investments.
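    As a simple illustration of the kind of model involved (not the block-structured Markov chains proposed in the paper), the Python sketch below treats a branch's end-of-day cash level as a three-state Markov chain with an assumed transition matrix and computes its stationary (steady-state) distribution, i.e. the long-run share of days spent in each cash bucket.

        # Minimal sketch: stationary distribution of a discretised cash-level chain.
        # The states and transition probabilities are hypothetical.
        import numpy as np

        P = np.array([
            [0.6, 0.3, 0.1],   # from "low" cash holdings
            [0.2, 0.6, 0.2],   # from "target" cash holdings
            [0.1, 0.4, 0.5],   # from "high" cash holdings
        ])

        def stationary(P):
            """Left eigenvector of P for eigenvalue 1, normalised to sum to 1."""
            vals, vecs = np.linalg.eig(P.T)
            v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
            return v / v.sum()

        pi = stationary(P)
        print(pi)        # long-run share of days in each cash bucket
        print(pi @ P)    # equals pi: the distribution is invariant under P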

    Spatial relationships between fishes and amphibians: implications for conservation planning in a Neotropical Hotspot

    Covre, A. C., Lourenço-De-Moraes, R., Campos, F. S., & Benedito, E. (2022). Spatial relationships between fishes and amphibians: implications for conservation planning in a Neotropical Hotspot. Environmental Management, 70(6), 978–989. https://doi.org/10.21203/rs.3.rs-1479895/v1, https://doi.org/10.1007/s00267-022-01707-7 ---- This work received financial support from CAPES (Finance Code 001), CNPq (151473/2018-8), FCT (PTDC/CTA-AMB/28438/2017), and MagIC/NOVA IMS (UIDB/04152/2020). ---- Species distribution patterns are widely used to guide conservation planning and are a central issue in ecology. The usefulness of spatial correlation analysis has been highlighted in several ecological applications so far. However, spatial assumptions in ecology are highly scale-dependent, and geographical relationships between species diversity and distributions can raise different conservation concerns. Here, an integrative landscape planning approach was designed to show the spatial distribution patterns of taxonomic and functional diversity of amphibians and fishes, based on multiple species traits regarding morphology, life history, and behavior. We used spatial, morphological, and ecological data of amphibians and fishes to calculate the functional diversity and the spatial correlation of species. Mapping results show that the higher taxonomic and functional diversity of fishes is concentrated in the western Atlantic Forest, whereas that of amphibians is located in the eastern portion of the biome. The spatial correlation of species indicates the Serra do Mar region and the extreme southern part of the Central Corridor as the main areas of overlapping species distributions between the two groups. New key conservation sites were identified within the Brazilian Atlantic Forest hotspot, revealing cross-taxon mismatches between terrestrial and freshwater ecosystems. This study offers useful spatial information integrating suitable habitats of fishes and amphibians to complement existing and future research on terrestrial and freshwater conservation. New priorities for biodiversity conservation in species-rich regions highlight the importance of spatial pattern analysis to support land-use planning in a macroecological context.

    PharOS, a multicore OS ready for safety-related automotive systems: results and future prospects

    Automotive electrical/electronic architectures need to perform more and more functions that are mapped onto many different electronic control units (ECUs) because of their different safety levels or different application domains (body, powertrain, multimedia, etc.). Freedom from interference is required to comply with the upcoming ISO 26262 standard when mixing different ASIL levels on the same ECU, and is also required to cope with the safe integration of software from different suppliers. PharOS provides dedicated software partitioning mechanisms as well as controlled and efficient resource sharing by construction, from the design to the implementation stages. This paper presents the main features of PharOS that contribute to this property, together with the results of its application to an industry-driven case study and the associated future prospects.