Object-Oriented Implementation of the NAS Parallel Benchmarks using Charm++
This report describes experiences with implementing the NAS Computational Fluid Dynamics benchmarks using a parallel object-oriented language, Charm++. Our main objective in implementing the NAS CFD kernel benchmarks was to develop a code that could be used to experiment easily with different domain decomposition strategies and dynamic load balancing. We also wished to leverage the object orientation provided by Charm++ to develop reusable abstractions that would simplify the process of developing parallel applications. We first describe the Charm++ parallel programming model and the parallel object array abstraction, then go into detail about each of the Scalar Pentadiagonal (SP) and Lower/Upper Triangular (LU) benchmarks, along with performance results. Finally, we conclude with an evaluation of the methodology used.
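The parallel object array idea mentioned above can be illustrated with a small sketch. Charm++ programs are written in C++; the Python below is only a language-neutral illustration of the decomposition concept (all names are hypothetical, and none of this is Charm++ API), showing how a 2-D array of worker objects might each own one block of a grid:

```python
# Illustrative sketch (not Charm++ code): a 2-D array of worker objects,
# each owning one contiguous block of an NX x NY grid. All names here
# are hypothetical.

def block_ranges(n, parts):
    """Split n grid points into `parts` near-equal contiguous blocks."""
    base, extra = divmod(n, parts)
    ranges, start = [], 0
    for p in range(parts):
        size = base + (1 if p < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

class Worker:
    """Stand-in for one element of a 2-D object array owning a subdomain."""
    def __init__(self, ix, iy, xr, yr):
        self.index = (ix, iy)          # logical position in the object array
        self.x_range, self.y_range = xr, yr  # owned cell ranges

def make_worker_array(nx, ny, px, py):
    xs, ys = block_ranges(nx, px), block_ranges(ny, py)
    return [Worker(i, j, xs[i], ys[j]) for i in range(px) for j in range(py)]

workers = make_worker_array(64, 64, 4, 4)
print(len(workers), workers[0].x_range)  # 16 workers; the first owns x cells 0..15
```

Swapping the decomposition strategy then amounts to replacing `block_ranges`, which is the kind of experimentation the report describes.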
Cardiac cell modelling: Observations from the heart of the cardiac physiome project
In this manuscript we review the state of cardiac cell modelling in the context of international initiatives such as the IUPS Physiome and Virtual Physiological Human Projects, which aim to integrate computational models across scales and physics. In particular we focus on the relationship between experimental data and model parameterisation across a range of model types and cellular physiological systems. Finally, in the context of parameter identification and model reuse within the Cardiac Physiome, we suggest some future priority areas for this field.
Abstract Hidden Markov Models: a monadic account of quantitative information flow
Hidden Markov Models, HMM's, are mathematical models of Markov processes with state that is hidden, but from which information can leak. They are typically represented as 3-way joint-probability distributions.
We use HMM's as denotations of probabilistic hidden-state sequential programs: for that, we recast them as `abstract' HMM's, computations in the Giry monad, and we equip them with a partial order of increasing security. However, to encode the monadic type with hiding over some state we use a richer type than the conventional one that suffices for Markov models whose state is not hidden. We illustrate the construction with a small Haskell prototype.
We then present uncertainty measures as a generalisation of the extant diversity of probabilistic entropies, with characteristic analytic properties for them, and show how the new entropies interact with the order of increasing security. Furthermore, we give a `backwards' uncertainty-transformer semantics for HMM's that is dual to the `forwards' abstract HMM's: it is an analogue of the duality between the forwards, relational semantics and the backwards, predicate-transformer semantics of imperative programs with demonic choice.
Finally, we argue that, from this new denotational-semantic viewpoint, one can see that the Dalenius desideratum for statistical databases is actually an issue in compositionality, and we propose a means for taking it into account.
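The 3-way joint-distribution view lends itself to a small worked example. The sketch below uses invented probabilities (not taken from the paper) to represent one HMM step as a joint distribution over (initial state, observation, final state), and compares an adversary's chance of guessing the hidden final state before and after seeing the observation, a simple Bayes-vulnerability calculation in the spirit of quantitative information flow:

```python
# Toy quantitative-information-flow calculation on a 3-way joint
# distribution Pr(initial state, observation, final state).
# The probabilities are invented for illustration.
from collections import defaultdict

joint = {
    ('a', 'y0', 'a'): 0.25, ('a', 'y1', 'a'): 0.25,
    ('b', 'y0', 'b'): 0.40, ('b', 'y1', 'a'): 0.10,
}

def bayes_vulnerability(joint):
    """Expected probability of guessing the hidden final state after
    seeing the observation: sum over y of max over x' of Pr(y, x')."""
    by_y = defaultdict(lambda: defaultdict(float))
    for (x, y, x2), p in joint.items():
        by_y[y][x2] += p                  # marginalise out the initial state
    return sum(max(post.values()) for post in by_y.values())

# Prior vulnerability: best blind guess of the final state.
prior_vuln = max(sum(p for (x, y, x2), p in joint.items() if x2 == s)
                 for s in ('a', 'b'))
post_vuln = bayes_vulnerability(joint)
print(prior_vuln, post_vuln)  # posterior exceeds prior: the observation leaks
```

Here the observation raises the guessing probability from 0.60 to 0.75, which is exactly the kind of leakage the paper's entropies and uncertainty measures quantify in general.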
Strategic Planning Customer Experience using Predictive Analysis Indihome PT Telkom
Telkom's transformation paradigm of innovating in, and disrupting, a fast-changing industry is its main challenge today. Externally, customer behaviour is changing, and industries focused on high-performing Customer Experience (CX) are seeing customer and business growth. Internally, PT Telkom's financial performance is on a decreasing trend and its Net Promoter Score (NPS) is below the global best-in-class benchmark. The objective is to elevate CX to a corporate strategic priority, engaging all stakeholders to achieve CX transformation. Every year PT Telkom measures Net Promoter Score and Net Emotional Value (NEV) to gauge the loyalty and satisfaction of the firm's customer relationships, which correlate with revenue growth. The author models secondary data from PT Telkom Indonesia's NPS and NEV reports from 2014–2018, focused on Telkom Regional 3 West Java, and assigns relationship and satisfaction dimensions and attributes. Predictive analytics is a method that analyses current and historical facts to make predictions about the future, so that an accurate strategy can be determined and customer experience improved according to the level of correlation found. From the NPS and NEV reports, the dimensions and attributes are processed by predictive analysis using correlation and regression; the strongly correlated attributes that emerge become the key inputs to strategic planning for customer experience, in line with corporate strategy. The strongly correlated attributes from this statistical processing are the length of the installation process, the friendliness of the technician, and the ease of accessing points.
The strategic planning for customer experience then combines the results of the predictive analytics with benchmarking against other telco companies to propose a strategic programme covering the end-to-end customer journey, integrated with back-end systems; digitalisation and the digital ecosystem are expected to deliver business and revenue results.
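The correlation step described above can be sketched concretely. The numbers below are invented for demonstration (they are not taken from the Telkom NPS/NEV reports); the sketch computes the Pearson correlation between a yearly attribute score and NPS, the kind of statistic that would flag an attribute as "strongly correlated":

```python
# Hypothetical illustration of correlating one service attribute with NPS
# across reporting periods. All data values are invented.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Yearly score for one attribute (e.g. ease of accessing points) vs NPS.
attribute = [3.1, 3.4, 3.6, 3.9, 4.2]
nps       = [12,  18,  21,  27,  33]

r = pearson(attribute, nps)
print(round(r, 3))  # close to 1: a strong positive correlation
```

An attribute with |r| near 1 would then feed the regression step and, ultimately, the strategic programme; one with |r| near 0 would be dropped.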
A Structured Design Methodology for High Performance VLSI Arrays
With the geometric growth of integrated-circuit technology due to transistor scaling, together with the system-on-chip design strategy, the complexity of integrated circuits has increased manifold. A short time to market with high reliability and performance is one of the most competitive challenges. Both custom and ASIC design methodologies have evolved over time to cope with this, but the heavy manual labour in custom design and the statistical design in ASIC flows remain causes for concern. This work proposes a new circuit design strategy, focused mostly on arrayed structures such as TLBs, register files (RF), caches, and IPCAM, that greatly reduces manual effort while keeping the design regular and repetitive and still achieving high performance. The method makes the complete design a custom schematic, but built from standard cells; this requires adding some custom cells to the already extensive library to optimise the design for performance. Once the schematic is finalised, the designer places these standard cells in a spreadsheet, placing the cells in the critical paths close together. A Perl script then generates a Cadence Encounter-compatible placement file, and the design is routed in Encounter. Since the designer is the best judge of the circuit architecture, placement by the designer allows the most optimal design to be achieved. Several designs, including IPCAM, issue logic, TLB, RF, and cache, were carried out and their performance compared against fully custom and ASIC flows. The TLB, RF, and cache were part of the HEMES microprocessor.
Dissertation/Thesis, Ph.D. Electrical Engineering 201
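The spreadsheet-to-placement step can be sketched as follows. This is a stand-in in Python rather than Perl, and the cell pitch values and the output line format are invented for illustration; the real flow would emit Encounter's actual placement-file syntax. It shows the core idea: each spreadsheet cell's row and column become a physical coordinate for the named standard-cell instance.

```python
# Minimal stand-in for the placement-generation script described above:
# turn a spreadsheet-style grid of standard-cell instance names into
# "name x y" placement lines. Pitch values and output format are invented.
import csv, io

CELL_W, ROW_H = 0.6, 1.8  # hypothetical site width / row height, microns

def grid_to_placement(csv_text):
    rows = csv.reader(io.StringIO(csv_text))
    lines = []
    for r, row in enumerate(rows):
        for c, name in enumerate(row):
            if name.strip():  # empty spreadsheet cells leave gaps in the row
                lines.append(f"{name.strip()} {c * CELL_W:.2f} {r * ROW_H:.2f}")
    return lines

# Two spreadsheet rows; the designer has packed critical-path cells together.
sheet = "nand2_0,inv_1,\nff_0,,nand2_1\n"
for line in grid_to_placement(sheet):
    print(line)
```

The router then takes these designer-chosen locations as fixed, which is what lets the flow keep custom-like placement quality inside an ASIC-style toolchain.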
The art of fault-tolerant system reliability modeling
A step-by-step tutorial of the methods and tools used for the reliability analysis of fault-tolerant systems is presented. Emphasis is on the representation of architectural features in mathematical models. Details of the mathematical solution of complex reliability models are not presented. Instead, the use of several recently developed computer programs (SURE, ASSIST, STEM, PAWS) which automate the generation and solution of these models is described.
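A flavour of the Markov reliability models such tools solve can be given with a standard textbook example (not taken from the tutorial itself): a triple-modular-redundancy (TMR) system with per-unit failure rate lam and no repair, whose chain runs from "3 units up" to "2 units up" to system failure. The sketch integrates the state probabilities numerically and checks them against the known closed form R(t) = 3e^(-2*lam*t) - 2e^(-3*lam*t):

```python
# Tiny Markov reliability model of a TMR system, per-unit failure rate lam,
# no repair. States: 3 up --(3*lam)--> 2 up --(2*lam)--> failed.
import math

def tmr_reliability_numeric(lam, t, steps=100_000):
    """Euler integration of the Markov chain's state probabilities."""
    p3, p2 = 1.0, 0.0               # start with all three units working
    dt = t / steps
    for _ in range(steps):
        d3 = -3 * lam * p3                     # leave state 3 at rate 3*lam
        d2 = 3 * lam * p3 - 2 * lam * p2       # enter from 3; fail at 2*lam
        p3 += d3 * dt
        p2 += d2 * dt
    return p3 + p2                  # probability the system still works

lam, t = 1e-4, 1000.0               # hypothetical failure rate (/hour), mission time
closed_form = 3 * math.exp(-2 * lam * t) - 2 * math.exp(-3 * lam * t)
print(round(tmr_reliability_numeric(lam, t), 6), round(closed_form, 6))
```

Tools like SURE, ASSIST, STEM, and PAWS exist precisely because realistic architectures yield chains far too large for hand-written integrations like this one.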