Traceability and tracing of pharmaceutical distribution through Blockchain and Smart Contracts
[Abstract]: Pharmaceutical supply chains require a large number of actions and resources to track
the products that circulate through them. The emergence of Blockchain, however, represents a
substantial advance in product identification, as it adapts well to the conditions imposed by the
sector. This Bachelor's thesis therefore investigates technologies such as Blockchain to reinforce
these mechanisms and guarantee the security of product traffic throughout the pharmaceutical
supply chain. The developed system provides a graphical interface through which users can add,
update, and view information on both medicines and workers. In addition, the system offers great
reliability in ensuring the integrity of the information assigned to medicines: the data is
accessible throughout the supply chain, ensuring full transparency between members of the chain and
end users. In terms of security, therefore, the system takes great strides toward a solution
against counterfeiting in the supply of medicines and their subsequent sale.

Bachelor's thesis (UDC.FIC). Computer Engineering. Academic year 2021/202
Blockchain for a Resilient, Efficient, and Effective Supply Chain: Evidence from Cases
In modern acquisition, it is unrealistic to consider single entities as producing and delivering a product independently. Acquisitions usually take place through supply networks. The resiliency, efficiency, and effectiveness of supply networks directly contribute to the acquisition system's resiliency, efficiency, and effectiveness. All the involved firms form a part of a supply network essential to producing the product or service, so decision-makers have to look for new methodologies for supply chain management. Blockchain technology introduces new methods of decentralization and delegation of services, which can transform supply chains and result in a more resilient, efficient, and effective supply chain. This research aims to review and analyze selected current blockchain technology adoptions that enhance the resiliency of supply network management by facilitating collaboration and communication among suppliers, and to support the decision-making process. In the first part of this study, we discuss the limitations and challenges of the supply chain system that can be addressed by integrating blockchain technology. In the final part, we analyze multiple blockchain-based supply chain use cases to identify how the main features of blockchain are best suited to supply network management.
Preemptive Thread Block Scheduling with Online Structural Runtime Prediction for Concurrent GPGPU Kernels
Recent NVIDIA Graphics Processing Units (GPUs) can execute multiple kernels
concurrently. On these GPUs, the thread block scheduler (TBS) uses the FIFO
policy to schedule their thread blocks. We show that FIFO leaves performance to
chance, resulting in significant loss of performance and fairness. To improve
performance and fairness, we propose use of the preemptive Shortest Remaining
Time First (SRTF) policy instead. Although SRTF requires an estimate of runtime
of GPU kernels, we show that such an estimate of the runtime can be easily
obtained using online profiling and exploiting a simple observation on GPU
kernels' grid structure. Specifically, we propose a novel Structural Runtime
Predictor. Using a simple Staircase model of GPU kernel execution, we show that
the runtime of a kernel can be predicted by profiling only the first few thread
blocks. We evaluate an online predictor based on this model on benchmarks from
ERCBench, and find that it can estimate the actual runtime reasonably well
after the execution of only a single thread block. Next, we design a thread
block scheduler that is both concurrent kernel-aware and uses this predictor.
We implement the SRTF policy and evaluate it on two-program workloads from
ERCBench. SRTF improves STP by 1.18x and ANTT by 2.25x over FIFO. When compared
to MPMax, a state-of-the-art resource allocation policy for concurrent kernels,
SRTF improves STP by 1.16x and ANTT by 1.3x. To improve fairness, we also
propose SRTF/Adaptive which controls resource usage of concurrently executing
kernels to maximize fairness. SRTF/Adaptive improves STP by 1.12x, ANTT by
2.23x and Fairness by 2.95x compared to FIFO. Overall, our implementation of
SRTF achieves system throughput to within 12.64% of Shortest Job First (SJF, an
oracle optimal scheduling policy), bridging 49% of the gap between FIFO and
SJF.

Comment: 14 pages, full pre-review version of PACT 2014 poster
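The Staircase model above can be sketched compactly: thread blocks execute in "waves" whose size is set by how many blocks the GPU can run concurrently, so profiling a single block's runtime yields an estimate of the whole kernel's runtime, which SRTF then uses. This is an illustrative Python sketch under simplified assumptions, not the paper's implementation; `concurrent_blocks` stands in for the hardware's per-SM occupancy limit.

```python
import math

def predict_kernel_runtime(block_time, total_blocks, concurrent_blocks):
    """Staircase-style estimate: blocks run in waves of size
    concurrent_blocks, and each wave takes roughly one block's runtime.
    block_time comes from online profiling of the first thread block."""
    waves = math.ceil(total_blocks / concurrent_blocks)
    return waves * block_time

def srtf_pick(predicted_total, elapsed):
    """SRTF: schedule the kernel with the shortest predicted remaining time.
    predicted_total: {kernel: predicted runtime}; elapsed: {kernel: time run so far}."""
    return min(predicted_total,
               key=lambda k: predicted_total[k] - elapsed.get(k, 0.0))
```

The point of the model is that the estimate is available after only one block finishes, which is what makes online SRTF feasible for concurrent kernels.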
Standard-Compliant Snapshotting for SystemC Virtual Platforms
The steady increase in complexity of high-end embedded systems goes along with an increasingly complex design process.
We are currently still in a transition phase from Hardware-Description Language (HDL) based design towards virtual-platform-based design of embedded systems.
As design complexity rises faster than developer productivity a gap forms.
Restoring productivity while managing the increased design complexity can be achieved by focusing on the development of new tools and design methodologies.
In most application areas, high-level modelling languages such as SystemC are used in early design phases.
In modern software development Continuous Integration (CI) is used to automatically test if a submitted piece of code breaks functionality.
Application of the CI concept to embedded system design and testing requires fast build and test execution times from the virtual platform framework.
For this use case the ability to save a specific state of a virtual platform becomes necessary.
The saving and restoring of specific states of a simulation requires the ability to serialize all data structures within the simulation models.
Improving the frameworks and establishing better methods will only help to narrow the design gap, if these changes are introduced with the needs of the engineers and developers in mind.
Ultimately, it is their productivity that shall be improved.
The ability to save the state of a virtual platform enables developers to run longer test campaigns that can even contain randomized test stimuli.
If the saved states are modifiable the developers can inject faulty states into the simulation models.
This work contributes an extension to the SoCRocket virtual platform framework to enable snapshotting.
The snapshotting extension can be considered a reference implementation, as its use of current SystemC/TLM standards makes it compatible with other frameworks.
Furthermore, integrating the UVM SystemC library into the framework enables test-driven development and fast validation of SystemC/TLM models using snapshots.
These extensions narrow the design gap by helping designers, testers, and developers work more efficiently.
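The core requirement stated above — serializing all data structures in a model so a simulation state can be saved, restored, and even deliberately corrupted for fault injection — can be shown language-agnostically. The real implementation is C++ on SystemC/TLM; this is only a hypothetical Python sketch of the save/restore-and-inject idea, with invented model fields.

```python
import copy

class Model:
    """Toy simulation model; a real SystemC model would serialize its
    registers, FIFOs, and pending TLM transactions instead."""

    def __init__(self):
        self.regs = {"status": 0, "data": 0}
        self.cycle = 0

    def step(self):
        self.cycle += 1
        self.regs["data"] += 1

def snapshot(model):
    # Serialize the full model state (deep copy stands in for real serialization).
    return copy.deepcopy(model.__dict__)

def restore(model, snap):
    # Restoring from a snapshot lets a test resume mid-simulation; editing
    # the snapshot first injects a faulty state, as described above.
    model.__dict__.update(copy.deepcopy(snap))
```

In a CI flow, a snapshot taken after the (slow) boot phase lets each test start directly at the interesting state, which is where the build-and-test speedup comes from.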
Probabilistic Graphical Models on Multi-Core CPUs using Java 8
In this paper, we discuss software design issues related to the development
of parallel computational intelligence algorithms on multi-core CPUs, using the
new Java 8 functional programming features. In particular, we focus on
probabilistic graphical models (PGMs) and present the parallelisation of a
collection of algorithms that deal with inference and learning of PGMs from
data, namely maximum likelihood estimation, importance sampling, and greedy
search for solving combinatorial optimisation problems. Through these concrete
examples, we tackle the problem of defining efficient data structures for PGMs
and parallel processing of same-size batches of data sets using Java 8
features. We also provide straightforward techniques to code parallel
algorithms that seamlessly exploit multi-core processors. The experimental
analysis, carried out using our open source AMIDST (Analysis of MassIve Data
STreams) Java toolbox, shows the merits of the proposed solutions.

Comment: Pre-print version of the paper presented in the special issue on
Computational Intelligence Software in IEEE Computational Intelligence
Magazine.
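The pattern the paper expresses with Java 8 parallel streams — mapping a statistic over same-size batches of data and reducing the partial results — can be illustrated in Python. The Gaussian maximum likelihood example below is hypothetical and is not the AMIDST API; it only shows the map-reduce shape of the parallelisation.

```python
from concurrent.futures import ThreadPoolExecutor

def batch_stats(batch):
    """Sufficient statistics for a Gaussian MLE: count, sum, sum of squares."""
    n = len(batch)
    s = sum(batch)
    s2 = sum(x * x for x in batch)
    return n, s, s2

def parallel_gaussian_mle(batches, workers=4):
    """Map batch_stats over same-size batches in parallel, then reduce the
    partial statistics into the global mean and variance estimates."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        stats = list(pool.map(batch_stats, batches))
    n = sum(c for c, _, _ in stats)
    s = sum(t for _, t, _ in stats)
    s2 = sum(q for _, _, q in stats)
    mean = s / n
    var = s2 / n - mean * mean
    return mean, var
```

Because the per-batch statistics are additive, the reduction is order-independent, which is exactly what makes the per-batch map step safe to run in parallel.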
Invariant preservation in geo-replicated data stores
The Internet has enabled people from all around the globe to communicate with each
other in a matter of milliseconds. This possibility has a great impact on the way we work,
behave, and communicate, while the full extent of its possibilities is yet to be known. As we
become more dependent on Internet services, it becomes ever more important to ensure that these
systems operate correctly, with low latency and high availability, for millions of clients
scattered all around the globe.
To be able to provide service to a large number of clients, and low access latency
for clients in different geographical locations, Internet services typically rely on
geo-replicated storage systems. Replication comes with costs that may affect service quality.
To propagate updates between replicas, systems either choose to lose consistency in favor of
better availability and latency (weak consistency), or to maintain consistency at the cost of
becoming unavailable during network partitions (strong consistency).
In practice, many production systems rely on weak consistency storage systems to
enhance user experience, overlooking that applications can become incorrect under the weaker
consistency assumptions. In this thesis, we study how to exploit an application's semantics to
build correct applications without affecting the availability and latency of operations.
We propose a new consistency model that breaks with the traditional view that application
consistency depends on coordinating the execution of operations across replicas. We show that
it is possible to execute most operations with low latency and in a highly available way, while
preserving the application's correctness. Our approach consists of specifying the fundamental
properties that define the correctness of an application, i.e., the application invariants, and
of identifying and preventing concurrent executions that could make the state of the database
inconsistent, i.e., that may violate some invariant. We explore different, complementary
approaches to implementing this model.
The Indigo approach prevents conflicting operations from executing concurrently, by
restricting the operations that each replica can execute at each moment so as to maintain the
application's correctness.
The IPA approach does not preclude the execution of any operation, ensuring high
availability. To maintain application correctness, operations are modified to prevent invariant
violations during replica reconciliation; or, when modifying operations yields unsatisfactory
semantics, any invariant violation can be corrected before a client can read an inconsistent
state, by executing compensations.
Evaluation shows that our approaches can ensure both low latency and high availability
for most operations in common Internet application workloads, with small execution overhead
compared to unmodified weak consistency systems, while enforcing application invariants, as
strong consistency systems do.
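The Indigo-style idea of restricting which operations a replica may execute locally is often realized with escrow-like reservations: the "rights" to perform invariant-threatening operations are split among replicas, so each can act without coordination as long as it holds enough rights. The sketch below is a hypothetical Python illustration for a numeric invariant such as "stock >= 0"; the class and method names are invented, and rights transfer between replicas is not modeled.

```python
class EscrowReplica:
    """Reservation sketch: each replica holds a share of the allowed
    decrements, so the global invariant 'stock >= 0' holds even though
    replicas never coordinate individual operations."""

    def __init__(self, local_rights):
        self.rights = local_rights

    def buy(self, n):
        # Executes locally only if this replica still holds enough rights;
        # otherwise it would have to request rights from a peer (not modeled).
        if n <= self.rights:
            self.rights -= n
            return True
        return False

def partition(stock, replicas):
    # Split the initial stock's decrement rights evenly across replicas.
    share, extra = divmod(stock, replicas)
    return [EscrowReplica(share + (1 if i < extra else 0)) for i in range(replicas)]
```

Because the sum of all rights never exceeds the initial stock, no interleaving of local `buy` operations can drive the global count negative, which is the invariant-preservation argument in miniature.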
Efficient Scaling of Out-of-Order Processor Resources
Rather than improving single-threaded performance, with the dawn of the multi-core era, processor microarchitects have exploited Moore's law transistor scaling by increasing core density on a chip and increasing the number of thread contexts within a core. However, single-thread performance and efficiency are still very relevant in the power-constrained multi-core era, as increasing core counts do not yield corresponding performance improvements under real thermal and thread-level constraints. This dissertation provides a detailed study of register reference count structures and their application to both conventional and non-conventional, latency-tolerant, out-of-order processors. Prior work has incorporated reference counting, but without a detailed implementation or energy model. This dissertation presents a working implementation of reference count structures and shows that the overheads are low and can be recouped by the techniques they enable in high-performance out-of-order processors. A study of register allocation algorithms exploits register file occupancy to reduce power consumption by dynamically resizing the register file, which is especially important in the face of wider multi-threaded processors that require larger register files. Latency tolerance has been introduced as a technique to improve single-threaded performance by removing cache-miss-dependent instructions from the execution pipeline until the miss returns. This dissertation introduces a microarchitecture with a predictive approach to identify long-latency loads, reducing the energy cost and overhead of scaling the instruction window inherent in latency-tolerant microarchitectures. The key features include a front-end predictive slice-out mechanism and an in-order queue structure, along with mechanisms to reduce the energy cost and register-file usage of executing instructions. Cycle-level simulation shows improved performance and reduced energy delay for memory-bound workloads.
Both techniques scale processor resources, addressing register file inefficiency and the allocation of processor resources to instructions during low-ILP regions.

Ph.D., Computer Engineering, Drexel University, 201
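The register reference counting the dissertation studies can be sketched in software terms: a physical register is returned to the free pool only when its reference count drops to zero, and the live count directly exposes the occupancy used for dynamic resizing. This Python model is purely illustrative of the bookkeeping, not the dissertation's hardware design; the method names are invented.

```python
class RegisterFile:
    """Reference-counted physical register pool (illustrative sketch)."""

    def __init__(self, size):
        self.free = list(range(size))
        self.counts = {}  # live registers -> outstanding references

    def allocate(self):
        # Rename stage maps a destination to a free physical register;
        # the producing instruction holds the initial reference.
        reg = self.free.pop()
        self.counts[reg] = 1
        return reg

    def add_reader(self, reg):
        # Each consumer of the value adds a reference.
        self.counts[reg] += 1

    def release(self, reg):
        # A register is reclaimed only when its last reference is dropped.
        self.counts[reg] -= 1
        if self.counts[reg] == 0:
            del self.counts[reg]
            self.free.append(reg)

    def occupancy(self):
        # Live-register count: the signal a dynamic-resizing policy would watch.
        return len(self.counts)
```

The design point is that reclamation becomes local to the count update, instead of waiting for a later redefining instruction to commit, which shortens register lifetimes and shrinks required file size.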