
    Making Decryption Accountable


    Conceptual model for usable multi-modal mobile assistance during Umrah

    Performing Umrah is very demanding and takes place in very crowded environments. In response, many efforts have been initiated to overcome the difficulties faced by pilgrims. However, those efforts focus on acquiring an initial perspective and background knowledge before going to Mecca, and the findings of a preliminary study show that they do not support multi-modality in user interaction. Nowadays, the computational capabilities of mobile phones enable them to serve people in various aspects of daily life; consequently, mobile phone penetration has increased dramatically in the last decade. Hence, this study aims to propose a comprehensive conceptual model for usable multimodal mobile assistance during Umrah, called Multimodal Mobile Assistance during Umrah (MMA-U). Four (4) supporting objectives are formulated, and the Design Science Research Methodology has been adopted. For the usability of MMA-U, a Systematic Literature Review (SLR) indicates ten (10) attributes: usefulness, error rate, simplicity, reliability, ease of use, safety, flexibility, accessibility, attitude, and acceptability. Meanwhile, the content and comparative analysis yield the components that construct the conceptual model of MMA-U: structure, content composition, design principles, development approach, technology, and the design and usability theories. The MMA-U was then reviewed and well accepted by 15 experts, and was subsequently incorporated into a prototype called Personal Digital Mutawwif (PDM), developed for the purpose of a user test in the field. The findings indicate that PDM facilitates the execution of Umrah and successfully meets pilgrims’ needs and expectations. The pilgrims were satisfied, felt that they needed PDM, and would recommend it to their friends, which indicates that use of PDM is safe and suitable while performing Umrah.
In conclusion, the theoretical contribution, the conceptual model of MMA-U, provides guidelines for developing multimodal mobile content applications for use during Umrah.

    A holistic method for improving software product and process quality

    The concept of quality in general is elusive, multi-faceted, and perceived differently by different stakeholders. Quality is difficult to define and extremely difficult to measure. Deficient software systems regularly result in failures which often lead to significant financial losses, but more importantly to loss of human lives. Such systems need either to be scrapped and replaced by new ones, or corrected and improved through maintenance. One of the most serious challenges is how to deal with legacy systems which, even when not failing, inevitably require upgrades, maintenance, and improvement because of malfunctioning or changing requirements, or because of changing technologies, languages, or platforms. In such cases, the dilemma is whether to develop solutions from scratch or to re-engineer a legacy system. This research addresses this dilemma and seeks to establish a rigorous method for the derivation of indicators which, together with management criteria, can help decide whether restructuring of legacy systems is advisable. As the software engineering community has been moving from corrective methods to preventive methods, concentrating on both product quality improvement and process quality improvement has become imperative. This research investigation combines Product Quality Improvement, primarily through the re-engineering of legacy systems, with Process Improvement methods, models, and practices, and uses a holistic approach to study the interplay of Product and Process Improvement. The re-engineering factor rho, a composite metric, was proposed and validated. The design and execution of formal experiments tested hypotheses on the relationship of internal (code-based) and external (behavioural) metrics. In addition to proving the hypotheses, the insights gained on logistics challenges resulted in the development of a framework for the design and execution of controlled experiments in Software Engineering.
The next part of the research resulted in the development of the novel, generic and, hence, customisable Quality Model GEQUAMO, which observes the principle of orthogonality and combines a top-down analysis for the identification, classification, and visualisation of software quality characteristics with a bottom-up method for measurement and evaluation. GEQUAMO II addressed weaknesses that were identified during various GEQUAMO implementations and through expert validation by academics and practitioners. Further work on Process Improvement investigated Process Maturity and its relationship to Knowledge Sharing, and resulted in the development of the I5P Visualisation Framework for Performance Estimation through the Alignment of Process Maturity and Knowledge Sharing. I5P was used in industry and was validated by experts from academia and industry. Using the principles that guided the creation of the GEQUAMO model, the CoFeD visualisation framework was developed for comparative quality evaluation and selection of methods, tools, models, and other software artifacts. CoFeD is very useful because the selection of wrong methods, tools, or even personnel is detrimental to the survival and success of projects and organisations, and even to individuals. Finally, throughout many years of research and teaching in Software Engineering, Information Systems, and Methodologies, I observed the ambiguities of terminology: the use of one term to mean different concepts, and of one concept expressed in different terms. These practices result in a lack of clarity. Thus my final contribution comes in my reflections on terminology disambiguation for the achievement of clarity, and the development of a framework for achieving disambiguation of terms as a necessary step towards gaining maturity and justifying the use of the term “Engineering”, 50 years since the term Software Engineering was coined.
This research resulted in the creation of new knowledge in the form of novel indicators, models, and frameworks which can aid quantification and decision making, primarily on the re-engineering of legacy code and on the management of process and its improvement. The thesis also contributes to the broader debate and understanding of problems relating to Software Quality, and establishes the need for a holistic approach to software quality improvement from both the product and the process perspectives.

    Partially-Fair Computation from Timed-Release Encryption and Oblivious Transfer

    We describe a new protocol to achieve two-party ε-fair exchange: at any point in the unfolding of the protocol, the difference in the probabilities of the parties having acquired the desired term is bounded by a value ε that can be made as small as necessary. Our construction uses oblivious transfer and sidesteps previous impossibility results by using timed-release encryption, which releases its contents only after some lower-bounded time. We show that our protocol can be easily generalized to an ε-fair two-party protocol for all functionalities. To our knowledge, this is the first protocol to truly achieve ε-fairness for all functionalities. All previous constructions achieving some form of fairness for all functionalities (without relying on a trusted third party) had a strong limitation: fairness was only guaranteed to hold if the honest parties are at least as powerful as the corrupted parties and invest a similar amount of resources in the protocol, an assumption which is often not realistic. Our construction does not have this limitation: our protocol provides a clear upper bound on the running time of all parties, and partial fairness holds even if the corrupted parties have much more time or computational power than the honest parties. Interestingly, this shows that a minimal use of timed-release encryption suffices to circumvent an impossibility result of Katz and Gordon regarding ε-fair computation for all functionalities, without having to make the (unrealistic) assumption that the honest parties are as computationally powerful as the corrupted parties; this assumption was previously believed to be unavoidable in order to overcome the impossibility result. We present detailed security proofs of the new construction, which are non-trivial and form the core technical contribution of this work.
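    The bounded-advantage idea behind ε-fairness can be illustrated with a toy gradual-release simulation. This is a hypothetical sketch only, not the paper's construction (which uses oblivious transfer and timed-release encryption rather than plain alternating bit release): two parties reveal one bit of their secret per message, so whoever aborts can be ahead of the other side by at most one bit of knowledge.

    ```python
    # Toy sketch of alternating gradual release, illustrating bounded
    # advantage. Hypothetical example; NOT the protocol from the paper.

    def gradual_exchange(k=16, abort_after=None):
        """Alternately release one bit per message of each party's k-bit secret.

        abort_after: total number of messages sent before one party aborts
        (None = run to completion). Returns (bits_a_has, bits_b_has): how
        many bits of the OTHER party's secret each side holds at the end.
        """
        bits_a_has = bits_b_has = 0
        for round_no in range(k):
            bits_b_has += 1  # A sends bit round_no of her secret to B
            if abort_after is not None and 2 * round_no + 1 >= abort_after:
                break
            bits_a_has += 1  # B replies with bit round_no of his secret
            if abort_after is not None and 2 * round_no + 2 >= abort_after:
                break
        return bits_a_has, bits_b_has

    # Whatever the abort point, knowledge differs by at most one bit, so the
    # parties' probabilities of guessing the other's full secret stay close.
    k = 16
    for abort in range(1, 2 * k + 1):
        a, b = gradual_exchange(k, abort_after=abort)
        assert abs(a - b) <= 1
        p_a = 2.0 ** (a - k)  # chance of guessing the remaining unseen bits
        p_b = 2.0 ** (b - k)
    ```

    In the actual construction the corrupted party cannot even gain this one-message head start for free: the timed-release layer forces a lower-bounded delay before any released material becomes readable, which is what lets the bound ε be driven as small as desired independently of the adversary's computational power.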

    The African court on human and peoples’ rights: a test of African notions of human rights and justice

    Doctor Legum - LLD. The African Court on Human and Peoples’ Rights (the Court) is the most recent of the three regional human rights bodies. Envisioned by the African Charter on Human and Peoples’ Rights, its structure was not planned until the Organisation of African Unity (OAU) promulgated a protocol for its creation in 1998. The Court complements the protective mandate of the African Commission on Human and Peoples’ Rights (‘the Commission’), and has the competence to take final and binding decisions on human rights violations. Unlike the European and Inter-American systems, where the courts were integral parts of the cardinal instrument of the system ab initio, the establishment of the African Court was merely an afterthought. Initially, protection of rights rested solely with the Commission, reflecting an African justice system that emphasises reconciliation as a non-confrontational method of settling disputes. The Commission is a quasi-judicial body modelled after the United Nations Human Rights Committee, without binding powers and with only limited functions covering the examination of State reports, communications alleging violations, and interpretation of the Charter at the request of a State, the OAU, or any organisation recognised by the OAU. The thesis answers the question whether the adoption of the African Court means that the African model of enforcing human rights has failed, or whether the Court constitutes a concession to the triumph of the western model of law enforcement. The deliberations of the 30th Ordinary Session of the OAU in 1994, where the creation of an African Court of Human and Peoples’ Rights was viewed as the best way of protecting human rights across the region, will also be examined. The relevance of such an examination is highlighted by the fact that the African Charter did not make any provision for the establishment of a court to enforce the rights guaranteed thereunder.
If we are to assume that justice by reconciliation has failed and should be replaced by, or complemented with, justice by adjudication as the primary means of conflict resolution, what guarantees are there that the latter form of justice will not also fail? This thesis therefore critically evaluates the African Court on Human and Peoples’ Rights and assesses its potential impact on the African human rights system. It also probes the powers of the Court and asks whether a clear and mutually reinforcing division of labour between it and the African Commission can be developed to promote and protect human rights on the continent. This research brings into focus an area that requires attention if the African human rights regime is to be effective. It puts to the test the criticism against the African Charter and the Protocol to the African Charter on the Establishment of an African Court on Human and Peoples’ Rights, and identifies the existing flaws in the African regional system. Furthermore, it ascertains whether or not, given the availability of other options, a regional court is, in fact, the ideal mechanism for the protection of human rights in Africa.