
    Combined Time and Information Redundancy for SEU-Tolerance in Energy-Efficient Real-Time Systems

    The trade-off between energy consumption and fault tolerance in real-time systems has recently received considerable attention. Existing work has focused on dynamic voltage scaling (DVS) to reduce dynamic energy dissipation and on time redundancy to achieve transient-fault tolerance. While the time redundancy technique exploits the available slack time to increase fault tolerance by performing recovery executions, DVS exploits the same slack time to save energy; the two techniques therefore compete for the same resource. The first aim of this paper is to propose the use of information redundancy to resolve this conflict. We demonstrate through analytical and experimental studies that a combination of information and time redundancy achieves both higher transient-fault tolerance (tolerance to single event upsets, SEUs) and lower energy consumption than time redundancy alone. The second aim of this paper is to analyze the interplay of transient-fault tolerance (SEU tolerance) and adaptive body biasing (ABB), used to reduce static leakage energy, which has not been addressed in previous studies. We show that the same technique (the combination of time and information redundancy) is applicable to ABB-enabled systems and provides more advantages than time redundancy alone.
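
    To make the resource conflict concrete, the toy sketch below compares the two ways of spending slack: reserving it for a recovery re-execution versus paying a small per-run information-redundancy overhead and using all remaining slack for DVS. It is not the paper's analytical model; the quadratic energy law, the 10% code overhead, and the 150% slack figure are illustrative assumptions.

```python
# Illustrative toy model only (normalized units, assumed overheads); it is not
# the paper's analytical model, just a sketch of the slack conflict it describes:
# slack spent on a recovery re-execution cannot also be spent slowing down via DVS.

def dvs_energy(cycles: float, freq: float) -> float:
    # Simple convex model: energy per cycle scales ~ freq^2 when voltage tracks frequency.
    return cycles * freq ** 2

def time_redundancy_energy(cycles: float, deadline: float) -> float:
    # Reserve slack for one full re-execution, so both runs must fit in the deadline.
    freq = 2 * cycles / deadline
    return dvs_energy(cycles, freq)          # fault-free case: only the primary run consumes energy

def info_redundancy_energy(cycles: float, deadline: float, ecc_overhead: float = 0.10) -> float:
    # Pay a per-execution overhead for error-detecting/correcting codes (assumed 10%),
    # then use all remaining slack for DVS.
    padded = cycles * (1 + ecc_overhead)
    freq = padded / deadline
    return dvs_energy(padded, freq)

cycles, deadline = 1.0, 2.5                  # 150% slack, assumed for illustration
print("time redundancy :", round(time_redundancy_energy(cycles, deadline), 3))
print("info redundancy :", round(info_redundancy_energy(cycles, deadline), 3))
```

    Under these assumed numbers the information-redundancy variant can run at a much lower frequency and so uses noticeably less energy in the fault-free case, mirroring the trade-off the abstract describes.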

    Security Risk Management - Approaches and Methodology

    In today’s economic context, organizations are looking for ways to improve their business, keep ahead of the competition, and grow revenue. To stay competitive and consolidate their position in the market, companies must use all the information they have and process it to better support their missions. Managers therefore have to take into consideration the risks that can affect the organization and minimize their impact on it. Risk management helps managers better control business practices and improve business processes. Keywords: Risk Management, Security, Methodology

    Risk Governance in Bulgarian Dairy Farming

    This paper identifies and assesses the efficiency of the major modes of risk governance in Bulgarian dairy farming. First, New Institutional Economics is incorporated and a framework for analyzing the governance of natural, market, private, and social (institutional) risks is presented. Next, the major types of risks faced by dairy farms are specified, and the dominant market, private, public, and hybrid modes of risk governance are assessed. Finally, the principal forms of risk caused by dairy farms are identified, and the efficiency of the governing structures is assessed. The development of Bulgarian dairy farming has been associated with quite specific risk structures, both faced by and caused by this sector. Huge market and institutional instability and uncertainty, together with high transaction costs, have blocked the evolution of effective market and collective modes of risk protection. A variety of private modes (internal organization, vertical integration, interlinking) emerged to deal with the significant natural, market, private, and social risks faced by dairy farms and the other affected agents. Nevertheless, the diverse risks associated with dairy farming have not been effectively governed and persist throughout the transition. This is a consequence of ineffective public intervention (government, international assistance) to correct market and private-sector failures in risk governance. The latter has had considerable negative impacts on the evolution of farms, the development of markets, the structure of production and consumption, the state of the environment, etc. Certain risks related to the dairy sector have “disappeared” due to the lack of effective risk governance and the decline of dairy farming. This will lead to further deformation of the dairy and related sectors unless effective public measures are taken to mitigate the existing problems and risks. Keywords: risk management, dairy, Bulgaria, Livestock Production/Industries, Risk and Uncertainty

    Risk management in manufacturing systems: case study

    Master’s dissertation in Industrial Electronics and Computers Engineering. Since we live in a competitive world, it is necessary to progress side by side with the growing evolution of the market; organisations therefore need to be increasingly flexible and have a strong ability to adapt to change. Companies are consequently interested in analysing the internal and external factors that could compromise the continuity of the business. In addition, their customers and partners want assurance that their requirements are fulfilled, which leads companies to obtain certification against the relevant Quality standards. Considering technological advances and the demands of their partners, organisations such as the International Organisation for Standardisation (ISO) and the International Automotive Task Force (IATF) have revised their standards over the years, ensuring that they fit the competitive world in which we live. In this dissertation project, special attention was given to the ISO 9001:2015, ISO 31000:2018, and IATF 16949:2016 standards, since the project must respond to requirements found in them.
    In this project, the risk management model proposed in the ISO 31000:2018 standard was applied and adapted to the industrial context of Continental Mabor Indústria de Pneus (CMIP). During the application of the model, a Risk Assessment was carried out on six systems of the Manufacturing Execution System (MES), in which their inherent risks were identified and evaluated. After this evaluation, the next stage was Risk Treatment, where the need to update or create new contingency plans was defined, thus responding to a requirement of IATF 16949:2016. In parallel with the Risk Assessment, the points related to the Direção de Tecnologia e Informação (DTI) in the CMIP contingency plan were updated. The techniques used to present the results were Failure Mode and Effects Analysis (FMEA) and the Risk Register (RR). The conclusion of this project discusses the advantages, disadvantages, and limitations of the techniques applied. Implementing a risk management process allowed the organisation to act preventively in the production process.
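
    As a rough illustration of how FMEA output can feed a risk register, the sketch below computes the classic Risk Priority Number (severity × occurrence × detection) for a couple of hypothetical MES failure modes. The scales, entries, and scores are invented for illustration and are not taken from the dissertation's worksheets.

```python
# Minimal sketch of how FMEA scores could populate a risk register entry.
# Scales and failure modes are hypothetical examples, not CMIP data.

from dataclasses import dataclass

@dataclass
class FailureMode:
    system: str
    failure_mode: str
    severity: int      # 1 (negligible) .. 10 (critical)
    occurrence: int    # 1 (rare)       .. 10 (frequent)
    detection: int     # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Classic FMEA Risk Priority Number: severity x occurrence x detection.
        return self.severity * self.occurrence * self.detection

register = [
    FailureMode("MES interface", "server outage stops order dispatch", 8, 3, 4),
    FailureMode("MES interface", "corrupted production data", 6, 2, 7),
]

# Rank the register so the highest-priority risks are treated first.
for entry in sorted(register, key=lambda e: e.rpn, reverse=True):
    print(f"{entry.system:14s} | {entry.failure_mode:35s} | RPN = {entry.rpn}")
```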

    Local conflict and development projects in Indonesia : part of the problem or part of a solution ?

    Drawing on an integrated mixed methods research design, the authors explore the dynamics of the development-conflict nexus in rural Indonesia, and the specific role of development projects in shaping the nature, extent, and trajectories of "everyday" conflicts. They find that projects that give inadequate attention to dispute resolution mechanisms in many cases stimulate local conflict, either through the injection of development resources themselves or less directly by exacerbating preexisting tensions in target communities. But projects that have explicit and accessible procedures for managing disputes arising from the development process are much less likely to lead to violent outcomes. The authors argue that such projects are more successful in addressing project-related conflicts because they establish direct procedures (such as forums, facilitators, and complaints mechanisms) for dealing with tensions as they arise. These direct mechanisms are less successful in addressing broader social tensions elicited by, or external to, the development process, though program mechanisms can ameliorate conflict indirectly through changing norms and networks of interaction. Keywords: Post Conflict Reintegration, Development Economics & Aid Effectiveness, Education and Society, Rural Poverty Reduction, Population Policies

    Towards next generation WLANs: exploiting coordination and cooperation

    Wireless Local Area Networks (WLANs) operating in the industrial, scientific and medical (ISM) radio bands have gained great popularity and seen increasing usage over the past few years. The corresponding MAC/PHY specification, the IEEE 802.11 standard, has also evolved to adapt to this development. However, as the number of WLAN mobile users increases, and as their needs evolve in the face of new applications, there is an ongoing need for further evolution of the IEEE 802.11 standard. In this thesis we propose several MAC/PHY-layer protocols and schemes that provide higher system throughput, lower packet delivery delay, and reduced power consumption for mobile devices. Our work investigates three approaches that lead to improved WLAN performance: 1) cross-layer design of the PHY and MAC layers for higher system throughput, 2) exploiting implicit coordination among clients to increase the efficiency of random medium access, and 3) improved packet dispatching by the access points (APs) to preserve the battery of mobile devices. Each proposed solution is supported by theoretical proofs and studied extensively through simulations or experiments on testbeds.
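
    For context on why random medium access becomes less efficient as contention grows, which motivates the coordination-based schemes mentioned above (not reproduced here), the sketch below runs a simplified Monte Carlo estimate of the per-round collision probability when stations draw uniform backoff slots from a fixed contention window. It is a toy stand-in for 802.11 DCF-style contention, not one of the thesis's proposed protocols, and its parameters are assumptions.

```python
import random

# Illustrative only: collision probability of one contention round when every
# station draws a uniform backoff slot from a fixed contention window (CW).
# A toy model of 802.11 DCF-style random access, not the thesis's schemes.

def collision_probability(n_stations: int, cw: int = 16, trials: int = 100_000) -> float:
    collisions = 0
    for _ in range(trials):
        slots = [random.randrange(cw) for _ in range(n_stations)]
        winner = min(slots)
        # More than one station choosing the earliest slot means a collision.
        if slots.count(winner) > 1:
            collisions += 1
    return collisions / trials

for n in (2, 5, 10, 20):
    print(f"{n:2d} stations, CW=16: P(collision) ~ {collision_probability(n):.3f}")
```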

    Risk governance in agriculture

    This paper identifies and assesses the efficiency of the major modes of risk governance in agriculture, based on Bulgarian dairy farming. First, New Institutional and Transaction Costs Economics is incorporated and a framework for analyzing the governance of natural, market, private, and social (institutional) risks is presented. Next, the pace and challenges of dairy farming development during the post-communist transition and EU integration are outlined. Third, the major types of risks faced by dairy farms are specified, and the dominant market, private, public, and hybrid modes of risk governance are assessed. Finally, the principal forms of risk caused by dairy farms are identified, and the efficiency and impacts of the governing structures are assessed. The development of Bulgarian dairy farming has been associated with quite specific risk structures, both faced by and caused by this important sector of agriculture. Huge market and institutional instability and uncertainty, together with high transaction costs, have blocked the evolution of effective market and collective modes of risk protection. A great variety of private modes (internal organization, vertical integration, interlinking, etc.) has emerged to deal with the significant natural, market, private, and social risks faced by dairy farms and other affected agents. Nevertheless, the diverse risks associated with dairy farming have not been effectively governed and persist throughout the transition. This has been a consequence of ineffective public intervention (government, international assistance) to correct market and private-sector failures in risk governance. The latter has had considerable negative impacts on the evolution of farm size, productivity, and sustainability, the development of markets, the structure of production and consumption, the state of the environment, etc. What is more, certain risks related to the dairy sector have “disappeared” due to the lack of effective risk governance and declining dairy farming. This will lead to further deformation of the dairy and related sectors unless effective public measures (regulations, assistance, control, etc.) are taken to mitigate the existing problems and risks. Keywords: natural, market, private, and institutional risk management; governance; dairy farming; transition; CAP implementation; new institutional economics; Bulgaria

    Rediscovering Fiscal Policy Through Minskyan Eyes

    Recent developments in macroeconomic policy, in both theory and practice, have elevated monetary policy while fiscal policy has been downgraded. The latter is rarely mentioned in policy discussion, apart from arguments for placing limits on budget deficits and other fiscal variables. This paper presents the opposing view of Hyman P. Minsky. Rejecting the orthodox assumptions of unbounded individual and collective rationality, Minsky places uncertainty and financial instability at the centre of his analysis. The limits of individual and collective rationality feed each other, generating deviation-amplifying mechanisms that make the economy unstable. The economy thus assumes a cyclical behaviour that drives it from the torrid summers of speculative booms to the gloomy winters of financial crises, debt deflations, and deep depressions. Even if Minsky is generally considered one of the main interpreters of Keynes, this work argues that his economics is very different from Keynes's, in terms of both business cycles and growth. Compared with the Keynesian tradition, fiscal policy is, for Minsky, even more important and effective. Government intervention is not only necessary to reach and maintain full employment; it is also indispensable for containing capitalism's instability and averting disaster. The effect of fiscal policy is not only to underpin and stabilize aggregate demand, income, and employment; it also has the task of protecting the robustness of the financial system by stabilizing profits and by issuing government bonds. The opening up of the economy may increase its fragility, making fiscal policy even more important. The unprecedented growth of domestic and international financial transactions, as well as the recent financial turmoil, confirm the validity of Minsky's insights and make his views on fiscal policy even more noteworthy and fruitful. Keywords: Minsky, bounded rationality, business cycles, financial instability, fiscal policy

    Exploiting task-based programming models for resilience

    Hardware errors become more common as silicon technologies shrink and become more vulnerable, especially in memory cells, which are the most exposed to errors. Permanent and intermittent faults are caused by manufacturing variability and circuit ageing. While these can be mitigated once they are identified, their continuous rate of appearance throughout the lifetime of memory devices will always cause unexpected errors. In addition, transient faults are caused by effects such as radiation or small voltage/frequency margins, and there is no efficient way to shield against these events. Other constraints related to the diminishing sizes of transistors, such as power consumption and memory latency, have caused the microprocessor industry to turn to increasingly complex processor architectures. To address the difficulties of programming such architectures, programming models have emerged that rely on runtime systems. These systems form a new intermediate layer in the hardware-software abstraction stack that performs tasks such as distributing work across computing resources: processor cores, accelerators, etc. Runtime systems have a great deal of information about both the hardware and the applications, and thus offer many opportunities for optimisation. This thesis proposes solutions to the increasing fault rates in memory across multiple resilience disciplines, from algorithm-based fault tolerance to hardware error-correcting codes, through OS reliability strategies. These solutions rely for their efficiency on the opportunities presented by runtime systems.
    The first contribution of this thesis is an algorithm-based resilience technique for tolerating detected errors in memory. It recovers lost data, without restarting the solver, by performing computations that rely on simple redundancy relations identified in the program. The recovery is demonstrated for a family of iterative solvers, the Krylov subspace methods (conjugate gradient, GMRES, etc.), and evaluated for the conjugate gradient solver. The runtime can transparently overlap the recovery with the computations of the algorithm, which masks the already low overheads of this technique. The second part of this thesis proposes a metric to characterise the impact of faults in memory, which outperforms state-of-the-art metrics in precision and in the assurances it gives on the error rate. This metric reveals a key insight about data that is not relevant to the program, and we propose an OS-level strategy that ignores errors in such data by delaying the reporting of detected errors. This reduces the failure rates of running programs by ignoring errors that have no impact. The architecture-level contribution of this thesis is a dynamically adaptable Error Correcting Code (ECC) scheme that can increase the protection of memory regions where the impact of errors is highest. A methodology is presented to estimate the fault rate at runtime using our metric, through the performance-monitoring tools of current commodity processors. Guiding the dynamic ECC scheme online with the methodology's vulnerability estimates decreases the error rates of programs at a fraction of the redundancy cost required for a uniformly stronger ECC, providing a useful and wide range of trade-offs between redundancy and error rate.
    The work presented in this thesis demonstrates that runtime systems make the most of the redundancy stored in memory to help tackle increasing error rates in DRAM. The exploited redundancy can be an inherent part of algorithms that tolerate higher fault rates, can take the form of dead data stored in memory, or can be added to a program in the form of ECC. In all cases, the runtime decreases failure rates efficiently by diminishing recovery costs, identifying redundant data, or targeting critical data. It is thus a very valuable tool for future computing systems, as it can perform optimisations across different layers of abstraction.
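
    As a flavour of the kind of algorithmic redundancy relation the first contribution exploits, the sketch below recovers a lost block of the conjugate gradient residual from the invariant r = b - Ax. The setup and variable names are hypothetical; this is only a minimal illustration of the idea, not the thesis's actual recovery scheme.

```python
import numpy as np

# Illustrative only: recover a lost block of the CG residual vector r using
# the invariant r = b - A @ x, one example of the kind of redundancy relation
# that Krylov methods expose (setup is hypothetical, not the thesis's scheme).

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)          # symmetric positive definite system matrix
x_true = rng.standard_normal(n)
b = A @ x_true

x = rng.standard_normal(n)            # current CG iterate
r = b - A @ x                         # residual kept by the solver

lost = slice(2, 5)                    # pretend these entries were corrupted by a memory error
r_damaged = r.copy()
r_damaged[lost] = np.nan

# Recovery: recompute only the lost rows from data that survived (A, b, x).
r_damaged[lost] = b[lost] - A[lost, :] @ x

assert np.allclose(r_damaged, r)
print("recovered residual block:", r_damaged[lost])
```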
