
    Study of Gender as Social Practice and Tokenism in an Indian IT Company

    This systematic study examines women's gendered identity work in an Indian IT company. The research builds on a body of work exploring female tokenism in senior positions and highlights tensions in the practice of gender. The research was conducted through semi-structured interviews with twenty-two women employees, using a case study approach. A patriarchal Indian society, together with the social construction of IT as feminine and rewarding for women, accounts for the increase in women's participation in the industry. However, there is evidence of exclusion at all levels of the firm's hierarchy on account of gendering and social practices. Gender-based discrimination is prevalent in the nature of the work allocated and the compensation received, especially at the middle level, and such workplace discrimination is linked to gender stereotyping. Women tokens at the higher levels have been unable to influence policy directions in favor of women employees. Frustrated women employees chose passive coping mechanisms, such as acceptance, as part of the work culture and social expectations. Clearly, there are tensions between gender and social practice in the chosen firm. The company has a significant role to play in nurturing and supporting women employees through a strong support system. Strategies include avoiding negative connotations, mentoring, providing work-life balance support initiatives, and taking tough action against harassment, discrimination, and tokenism. The most important issue is awareness in society, which is effective in changing socially constructed beliefs and attitudes about gender and would go a long way toward improving women's experience in the workplace. Studies critically analyzing IT's implications for India's overall social and economic development are scarce, and there have been few sociological studies of work focusing on this industry or its workforce. This study addresses that gap in the literature. To the best of my knowledge, it is the first of its kind in the Indian context, as the few studies previously conducted on the women IT workforce ignored theoretical perspectives.

    Specification of coordination behaviors in software architecture using the Reo coordination language

    One of the key goals of a software architecture is to help application designers analyze a software system at a higher level of abstraction than the implementation. Software architects often use architecture description languages (ADLs) and their supporting tools to specify software architectures. Existing ADLs often lack formal foundations for the design, analysis, and reconfiguration of software architectures. The Reo language has a strong formal basis and promotes loose coupling, distribution, mobility, exogenous coordination, and dynamic reconfigurability. This thesis focuses on assessing the Reo coordination language as an ADL through the following work: a) specifying a distributed meeting scheduling system using the Reo coordination language; b) assessing the Reo coordination language as an ADL using an existing method.
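Reo builds connectors by composing simple channels, so that the composition, rather than the components themselves, dictates the coordination (exogenous coordination). As a rough illustration of the idea only (a hypothetical toy, not Reo's actual semantics or tooling), a FIFO1 channel can be modeled as a one-place buffer whose write and take operations are enabled only in the appropriate buffer state:

```python
class Fifo1:
    """Toy model of a Reo-style FIFO1 channel (hypothetical simplification):
    a buffer of capacity one; a write is enabled only when the buffer is
    empty, and a take is enabled only when it is full."""

    def __init__(self):
        self.buf = None

    def write(self, item):
        if self.buf is not None:
            return False  # channel full: the write operation is not enabled
        self.buf = item
        return True

    def take(self):
        if self.buf is None:
            return None  # channel empty: the take operation is not enabled
        item, self.buf = self.buf, None
        return item
```

In Reo proper, such channels are glued together at nodes to form larger connectors whose behavior is derived compositionally; this sketch only conveys the state-dependent enabling of channel ends.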

    Bio-inspired FPGA Architecture for Self-Calibration of an Image Compression Core based on Wavelet Transforms in Embedded Systems

    A generic bio-inspired adaptive architecture for image compression, suitable for implementation in embedded systems, is presented. The architecture allows the system to be tuned during its calibration phase; an evolutionary algorithm is responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core aimed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm, and its typical genetic operators have been adapted to allow a hardware-friendly implementation. HW/SW partitioning issues are also considered after profiling a high-level description of the algorithm, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. This prototype implementation may also serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system on the final implementation device, whether it is a HW- or SW-based computing device. The architecture has been built in a modular way so that it can easily be extended to adapt other types of image processing cores. Details on this pluggable-component point of view are also given in the paper.
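The search loop of an Evolution Strategy is simple enough to sketch. The following toy (1+1)-ES evolves a two-tap transform coefficient pair toward perfect reconstruction of a signal; the fitness function, the coefficient encoding, and all names here are illustrative stand-ins, not the paper's hardware implementation:

```python
import random

def fitness(coeffs, signal):
    """Toy stand-in for compression quality (hypothetical): reconstruction
    error of a 2-tap analysis/synthesis pair applied to a 1-D signal.
    The error is zero exactly when a**2 + b**2 == 1 (an orthogonal pair)."""
    a, b = coeffs
    err = 0.0
    for i in range(0, len(signal) - 1, 2):
        lo = a * signal[i] + b * signal[i + 1]   # low-pass sample
        hi = b * signal[i] - a * signal[i + 1]   # high-pass sample
        # apply the same pair again to reconstruct, accumulate squared error
        x0 = a * lo + b * hi
        x1 = b * lo - a * hi
        err += (x0 - signal[i]) ** 2 + (x1 - signal[i + 1]) ** 2
    return err

def one_plus_one_es(signal, generations=200, sigma=0.1, seed=1):
    """(1+1) Evolution Strategy: Gaussian mutation, keep the child
    whenever it is no worse than the parent."""
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    best = fitness(parent, signal)
    for _ in range(generations):
        child = [c + rng.gauss(0, sigma) for c in parent]
        f = fitness(child, signal)
        if f <= best:
            parent, best = child, f
    return parent, best
```

The hardware-friendliness mentioned in the abstract would come from restricting the mutation and selection operators to operations that map well onto FPGA fabric; this sketch only shows the algorithmic skeleton.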

    TRADING GREEN BONDS USING DISTRIBUTED LEDGER TECHNOLOGY

    The promising markets for voluntary carbon credits face crippling challenges in the certification of carbon sequestration and lack the scalable market infrastructure through which institutions can invest in carbon offsetting. This amounts to a funding problem for the agricultural sector, as farmers are unable to access the liquidity needed to fund the transition to sustainable practices. We explore the feasibility of mitigating these infrastructural challenges with a DLT Trading and Settlement System for 'green bonds'. The artefact employs a multi-sharded architecture in which the participating nodes retain carefully orchestrated responsibilities in the functioning of the network. We evaluate the artefact in a supranational context with an EU-based regulator as part of a regulatory sandbox program mandated by the new EU DLT Pilot regime. By conducting design-driven research with stakeholders from industrial and governmental bodies, we contribute to the IS literature on the practical implications of DLT.

    Financial Soundness Prediction Using a Multi-classification Model: Evidence from Current Financial Crisis in OECD Banks

    The paper aims to develop an early warning model that separates previously rated banks (337 Fitch-rated banks from the OECD) into three classes, based on their financial health and using a one-year window. The early warning system is based on a classification model that estimates the Fitch ratings using Bankscope bank-specific data, regulatory data, and macroeconomic data as input variables. The authors propose a "hybridization technique" that combines the Extreme Learning Machine and the Synthetic Minority Over-sampling Technique (SMOTE). Due to the imbalanced nature of the problem, the authors apply an oversampling technique to the data, aiming to improve the classification results on the minority groups. The proposed methodology outperforms other existing classification techniques used to predict bank solvency, and it proved essential in improving average accuracy and especially the performance of the minority groups.
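SMOTE's core idea can be sketched in a few lines: synthetic minority samples are generated by interpolating between a minority sample and one of its k nearest minority-class neighbours. The following toy implementation is a simplified illustration under those assumptions, not the authors' pipeline (which pairs the oversampling with an Extreme Learning Machine classifier):

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Minimal SMOTE sketch (hypothetical toy): each synthetic sample lies
    on the segment between a minority point and one of its k nearest
    minority-class neighbours, at a random interpolation factor in [0, 1).

    minority: list of feature tuples; returns n_new synthetic tuples."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x within the minority class (excluding x)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic
```

Because each synthetic point is a convex combination of two real minority samples, the oversampled class stays inside the region the minority data already occupies, which is what lets the classifier see a balanced training set without inventing outliers.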

    Performance Improvements Using Dynamic Performance Stubs

    This thesis proposes a new methodology to extend the software performance engineering process. Common performance measurement and tuning principles mainly aim to improve the software function itself: the application source code is studied and improved independently of the overall system performance behavior. Moreover, the optimization of the software function has to be done without an estimate of the expected optimization gain. This often leads to under- or over-optimization and, hence, does not utilize the system sufficiently. The proposed performance improvement methodology and framework, called dynamic performance stubs, addresses these insufficiencies by evaluating the overall system performance improvement. This is achieved by simulating the performance behavior of the original software functionality, depending on an adjustable optimization level, prior to the real optimization. This enables the software performance analyst to determine the system's overall performance behavior under the possible outcomes of different improvement approaches. Moreover, using the dynamic performance stubs methodology, a cost-benefit analysis of different optimizations with respect to performance behavior can be carried out. The approach of dynamic performance stubs is to replace the software bottleneck with a stub. This stub combines a simulation of the software functionality with the ability to adjust the performance behavior depending on one or more performance aspects of the replaced software function. A general methodology for using dynamic performance stubs, as well as several methodologies for simulating different performance aspects, is discussed. Finally, several case studies are presented to show the application and usability of the dynamic performance stubs approach.
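The central mechanism, replacing a bottleneck with a stub whose cost can be dialed up or down, can be illustrated with a minimal sketch. Here the stub simulates only the time behavior via an adjustable optimization level; the function names and the linear cost model are assumptions for illustration, not the thesis's framework:

```python
import time

def performance_stub(baseline_cost, optimization_level):
    """Hypothetical sketch of a dynamic performance stub: a stand-in for a
    bottleneck function that consumes wall time proportional to the cost
    remaining after a given optimization level
    (0.0 = unoptimized baseline, 1.0 = cost fully removed)."""
    def stub(*args, **kwargs):
        time.sleep(baseline_cost * (1.0 - optimization_level))
        return None  # functional behavior would be simulated separately
    return stub
```

Measuring the whole system with the stub swapped in at several optimization levels yields the curve of overall gain versus local improvement, which is exactly the input a cost-benefit decision needs before any real optimization effort is spent.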

    Polarization state manipulation with sub-micron structures


    Constraint programming for random testing of a trading system

    Financial markets use complex computer trading systems whose failures can cause serious economic damage, making reliability a major concern. Automated random testing has been shown to be useful in finding defects in these systems, but its inherent test oracle problem (the automatic generation of the expected system output) is a drawback that has typically prevented its application on a larger scale. Two main tasks have been carried out in this thesis as a solution to the test oracle problem. First, an independent model of a real trading system based on constraint programming, a method for solving combinatorial problems, has been created. Then, the model has been integrated as a true test oracle in automated random tests. The test oracle maintains the expected state of an order book throughout a sequence of random trade order actions, and provides the expected output of every auction triggered in the order book by generating a corresponding constraint program that is solved with the aid of a constraint programming system. Constraint programming has allowed the development of an inexpensive yet reliable test oracle. In 500 random test cases, the test oracle detected two system failures. These failures correspond to defects that had been present for several years without being discovered either by less complete oracles or by the application of more systematic testing approaches. The main contributions of this thesis are: (1) empirical evidence of both the suitability of applying constraint programming to solve the test oracle problem and the effectiveness of true test oracles in random testing, and (2) a first attempt, as far as the author is aware, to model a non-theoretical continuous double auction using constraint programming.
    Castañeda Lozano, R. (2010). Constraint programming for random testing of a trading system. http://hdl.handle.net/10251/8928
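As a rough illustration of what such an oracle computes (a toy under assumed rules, not the thesis's constraint model or solver), the clearing price of a call auction can be characterized declaratively as the price that maximizes executable volume, and a naive solver can simply enumerate the candidate prices:

```python
def clearing_price(bids, asks):
    """Toy call-auction oracle (hypothetical sketch): among the quoted
    prices, pick the one that maximizes matched volume, where demand is
    the quantity of bids at or above the price and supply is the quantity
    of asks at or below it. The thesis encodes properties like this as a
    constraint program over the order book instead of enumerating.

    bids, asks: lists of (price, quantity). Returns (volume, price)."""
    candidates = sorted({p for p, _ in bids} | {p for p, _ in asks})
    best = None  # (matched volume, clearing price)
    for price in candidates:
        demand = sum(q for p, q in bids if p >= price)
        supply = sum(q for p, q in asks if p <= price)
        volume = min(demand, supply)
        if best is None or volume > best[0]:
            best = (volume, price)
    return best
```

Used as a test oracle, such a model is run alongside the system under test on the same random order stream, and any divergence between the model's expected auction outcome and the system's actual output flags a potential defect.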

    Advanced information processing system: Fault injection study and results

    The objective of the AIPS program is to achieve a validated fault-tolerant distributed computer system. The goals of the AIPS fault injection study were: (1) to present the fault injection study components addressing the AIPS validation objective; (2) to obtain feedback for fault removal from the design implementation; (3) to obtain statistical data regarding fault detection, isolation, and reconfiguration responses; and (4) to obtain data regarding the effects of faults on system performance. The parameters that must be varied to create a comprehensive set of fault injection tests are described, along with the subset of test cases selected, the test case measurements, and the test case execution. Both pin-level hardware faults, using a hardware fault injector, and software-injected memory mutations were used to test the system. An overview is provided of the hardware fault injector and the associated software used to carry out the experiments. Detailed specifications of the faults and test results are given for the I/O Network and the AIPS Fault Tolerant Processor, respectively. The results are summarized and conclusions are given.
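A software-injected memory mutation of the kind used in such studies can be sketched as a single bit flip in a simulated memory image, paired with a toy detection mechanism; the per-word parity check below is only an illustrative stand-in, not AIPS's actual detection, isolation, or reconfiguration machinery:

```python
def inject_bit_flip(memory, addr, bit):
    """Software-injected memory mutation (hypothetical sketch):
    flip one bit of a simulated memory word in place."""
    memory[addr] ^= 1 << bit

def parity_detects(memory, stored_parity):
    """Toy detection mechanism: report a fault if any word's current
    parity disagrees with the parity recorded at write time."""
    return any(bin(w).count("1") % 2 != p
               for w, p in zip(memory, stored_parity))
```

Repeating such injections over many addresses, bits, and times, and recording whether and how quickly the system detects and isolates each fault, is what yields the statistical coverage and latency data a fault injection campaign is after.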