17 research outputs found

    A study of Singapore female primary teachers’ self-efficacy for teaching Science

    Get PDF
    According to Bandura, self-efficacy is an individual’s judgment of their capability to organize and execute the courses of action required to attain designated types of performances. A strong relationship has been proposed between Science teaching efficacy beliefs and Science teaching behaviors (Sarikaya, 2005), and research has shown that teachers’ self-efficacy affects their students’ performance. Female teachers make up more than 80% of the teaching population in Singapore primary schools, and given the many reports that teachers shun Science and that women report low Science self-efficacy, one might expect the same of Singapore female teachers. Despite this, the ‘Trends in International Mathematics and Science Study’ (TIMSS) 2007 reported that Singapore primary four students scored amongst the top internationally, and not for the first time; there was also no significant difference between the boys’ and girls’ results. The aim of this study is to determine the self-efficacy of Singapore female primary Science teachers relative to their male counterparts (N=80) and to identify enablers and barriers faced by high- and low-efficacy female Science teachers. A mixed methods approach was used. Analysis of the Science Teaching Efficacy Belief Instrument (STEBI-A) revealed that male teachers reported higher Personal Science Teaching Efficacy (PSTE) scores than female teachers, but an independent samples t-test showed that the difference was not statistically significant. For Science Teaching Outcome Expectancy (STOE), male teachers again scored higher than females, but given the very small difference between the means, the difference was not significant. The trend may reflect a tendency for male teachers to have higher PSTE than their female counterparts, but the data do not support this claim. From the STEBI-A scores, four female teachers were selected for semi-structured interviews to explore in-depth accounts of Singapore female primary teachers’ attitudes towards teaching Science. Recommendations are made to raise the self-efficacy of female teachers and to optimise primary Science teaching in Singapore.
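
    As a concrete illustration of the comparison reported above, a minimal sketch of an independent-samples t-test on PSTE scores follows. The score values are hypothetical placeholders, not the study's data; the 16/64 group split merely mirrors the reported ~80% female share of N=80.

```python
# Sketch of an independent-samples t-test on PSTE subscale scores.
# All values are synthetic; this is not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pste_male = rng.normal(loc=52.0, scale=6.0, size=16)    # hypothetical male PSTE scores
pste_female = rng.normal(loc=50.5, scale=6.0, size=64)  # hypothetical female PSTE scores

t_stat, p_value = stats.ttest_ind(pste_male, pste_female, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value >= 0.05:
    print("Difference in means is not statistically significant at alpha = 0.05")
```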

    Design for reliability through engineering optimization

    No full text
    The pursuit of Moore’s Law, in terms of improving transistor performance simply by shrinking transistor geometry (e.g. oxide thickness reduction, gate length reduction), has faced technical challenges in meeting performance requirements since the 130nm technology node, where copper interconnect was first introduced. To sustain Moore’s Law, manufacturers have gone beyond conventional geometric scaling into “artificial” scaling, using new materials to fabricate transistors with improved performance. Besides the use of new materials to improve transistor performance from one technology to the next, the need to pack more transistors into a given space has pushed technology geometry into the nanoscale, resulting in very stringent requirements on the manufacturability and variation-control capability of fabrication tools. To keep pace in this dynamic and challenging environment, the conventional method of developing, qualifying and controlling a technology must change, so that the limited resources of time and cost are optimized for fast time-to-market without compromising reliability and quality. For older technologies, where material properties are well understood and process margins are good, technology qualification usually focuses on the intrinsic aspect of process reliability: in most cases, intrinsic process reliability is tested, qualified and subsequently monitored on test structures from a limited number of qualification and monitoring lots. For advanced technologies with narrow process margins, however, the actual reliability performance must be evaluated using large volumes of data, so that the impact of variation on reliability can be studied and understood as early as possible in the development phase. To ensure robust reliability, especially for advanced technologies, it is critical to take a more proactive approach to understanding intrinsic reliability performance and the impact of process variation on reliability, and to control the key process parameters affecting reliability before actual product qualification. This proactive approach helps prevent field reliability failures caused by a narrow reliability margin worsened by process variation arising from product-specific design-to-process sensitivity. It also helps prevent situations where process changes to improve process and reliability margins are introduced late in the development phase in response to reliability failures, significantly impacting time-to-market and increasing development cost. This work proposes a paradigm change in reliability qualification and monitoring methodology, aimed at establishing a linkage between variation in key process parameters and product reliability performance. The objective is to enable reliability robustness to be studied systematically, effectively and efficiently from the early technology development phase through to high-volume manufacturing of the product. To establish this linkage between key process parameters and their impact on product reliability, an innovative Design-For-Reliability (DFR) methodology using engineering optimization is proposed.
    In this method, the key process parameters affecting reliability are investigated through a set of reliability test structures with built-in variation (DFR test structures) to understand the impact of process variation on reliability. Given the constraints of limited space on the test chip and limited testing resources, the DFR test structures allow large volumes of reliability data to be collected in an effective and practical manner through fast wafer-level reliability testing methods such as isothermal electromigration (EM). With the large volume of reliability data collected from DFR test structures with known built-in variation, the impact of process variation on reliability can be modelled systematically. This thesis shows that the proposed DFR-through-engineering-optimization methodology establishes, for the first time, a systematic means of linking process parameter variation to its impact on reliability. Using large volumes of process parameter data, a DFR model is established that enables the impact of process variation on reliability to be studied systematically and potential reliability weaknesses due to process variation to be identified in the factory before delivery to the customer. This proactive method of reliability qualification and monitoring not only represents a paradigm change in the conventional way reliability is viewed but, more importantly, enables a systematic approach to drive for robust, variation-tolerant reliability from the development phase to the production phase of the technology.
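
    The DFR linkage described above lends itself to a simple illustration: fit a lognormal distribution to EM failure times from test structures with deliberately varied linewidth, and watch the fitted median lifetime t50 and shape parameter sigma move with the process parameter. The sketch below uses entirely synthetic numbers and an assumed qualitative trend (narrower lines fail earlier, with more spread); it is not the thesis's model.

```python
# Fit a lognormal to synthetic EM failure times per linewidth split and
# report the fitted t50 and sigma. All numbers are placeholders.
import numpy as np

rng = np.random.default_rng(1)
linewidths_nm = [45, 50, 55]  # hypothetical built-in linewidth splits
for w in linewidths_nm:
    # Assumed trend for illustration only: narrower lines -> lower t50,
    # larger sigma.
    t50_true = 100.0 * (w / 50.0) ** 2
    sigma_true = 0.5 * (50.0 / w)
    times = rng.lognormal(mean=np.log(t50_true), sigma=sigma_true, size=200)

    # Lognormal MLE reduces to mean/std of the log failure times.
    log_t = np.log(times)
    t50_hat = np.exp(log_t.mean())
    sigma_hat = log_t.std(ddof=1)
    print(f"w = {w} nm: t50 = {t50_hat:.1f} h, sigma = {sigma_hat:.2f}")
```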

    Geographical indications at the crossroads of trade, development, and culture: Focus on Asia-Pacific

    No full text
    Historically, few topics have proven to be so controversial in international intellectual property as the protection of geographical indications (GIs). The adoption of TRIPS in 1994 did not resolve disagreements, and countries worldwide continue to quarrel today as to the nature, the scope, and the enforcement of GI protection nationally and internationally. Thus far, however, there is little literature addressing GI protection from the point of view of the Asia-Pacific region, even though countries in this region have actively discussed the topic and in several instances have promoted GIs as a mechanism to foster local development and safeguard local culture. This book, edited by renowned intellectual property scholars, fills the void in the current literature and offers a variety of contributions focusing on the framework and effects of GI protection in the Asia-Pacific region. The book is available as Open Access.

    The Cambridge Handbook of Copyright Limitations and Exceptions

    No full text
    While copyright law is ordinarily thought to consist primarily of exclusive rights, the regime's various exemptions and immunities from liability for copyright infringement form an integral part of its functioning, and serve to balance copyright's grant of a private benefit to authors/creators with the broader public interest. With contributors from all over the world, this handbook offers a systematic, thorough study of copyright limitations and exceptions adopted in major jurisdictions, including the United States, the European Union, and China. In addition to providing justifications for these limitations, the chapters compare differences and similarities that exist in major jurisdictions and offer suggestions about how to improve the enforcement of copyright limitations domestically and globally. This work should appeal to scholars, policymakers, attorneys, teachers, judges, and students with an interest in the theories, policies, and doctrines of copyright law.

    A novelty detection machine and its application to bank failure prediction

    No full text
    Novelty detection has been well studied for many years and has found a wide range of applications, but correctly identifying outliers remains a hard problem because outliers are both diverse and few in number. We address the problem using several distinct characteristics of outliers and normal patterns. First, normal patterns are usually grouped together, forming clusters in high-density regions of the data space. Second, outliers are characteristically very different from normal patterns and hence tend to lie far from them in the data space. Third, the number of outliers in a given dataset is generally very small. Based on these observations, the appropriate decision boundary segregating outliers from normal patterns usually lies in low-density regions of the data space; this is referred to as the cluster assumption. The resulting optimization problem for learning the decision function can be solved using a mixed integer programming approach. We then present a cutting plane algorithm, together with a multiple kernel learning technique, to solve a convex relaxation of the optimization problem. Specifically, we exploit the scarcity of the outliers to find a violating solution for the cutting plane algorithm. Experimental results on several benchmark datasets show that the proposed novelty detection method outperforms existing hyperplane- and density-estimation-based techniques. We subsequently apply the method to the prediction of banking failures, identifying potential bank failures or high-risk banks through traits of financial distress.
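
    The paper's mixed integer formulation and cutting plane algorithm are not reproduced here, but a standard kernel-based baseline from the "hyperplane" family it compares against makes the task concrete. Below is a minimal sketch using scikit-learn's OneClassSVM on synthetic data with clustered normal patterns and a handful of scattered outliers.

```python
# Baseline novelty detection: train on normal patterns only, then flag
# points the learned boundary rejects. Synthetic data for illustration.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # clustered normal patterns
outliers = rng.uniform(low=-6, high=6, size=(5, 2))     # few, scattered outliers

clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal)
print(clf.predict(outliers))  # -1 marks predicted novelties
```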

    Volumetric modulated arc therapy in prostate cancer patients with metallic hip prostheses in a UK centre

    No full text
    Aim: This study aimed to investigate whether IMRT using VMAT is a viable and safe solution for dose-escalated radiotherapy in prostate cancer patients with metallic hip prostheses.
    Background: An increasing number of prostate cancer patients are elderly and have hip prostheses. These implants pose challenges in radiotherapy treatment planning. Although intensity modulated radiotherapy (IMRT) is commonly used, there is a lack of clinical studies documenting its efficacy and toxicities in this subgroup of patients.
    Materials and methods: Data from 23 patients with hip prostheses and non-metastatic prostate cancer treated with volumetric modulated arc therapy (VMAT) between 2009 and 2011 were retrospectively analyzed. Baseline characteristics, treatment details and outcome data were collected for all patients. The median follow-up was 40.9 months. MRI-CT image fusion was performed and the treatment plans were created using RapidArc™ (RA) techniques, utilizing 1 or 2 arcs and 10 MV photon beams.
    Results: 96% of patients were treated with a dose of 72 Gy in 32 fractions over 44 days. 21/23 plans met the PTV targets. The mean homogeneity index was 1.07. 20/23 plans met all organ-at-risk (OAR) constraints (rectum, bladder); two plans deviated from rectal constraints and four from bladder constraints, all classed as minor deviations. One patient experienced late grade 3 genitourinary toxicity, and three other patients experienced late grade 2 or lower gastrointestinal toxicity. One patient had biochemical failure and one had a non-prostate-cancer-related death.
    Conclusions: VMAT provides an elegant solution for delivering dose-escalated radiotherapy in patients with unilateral and bilateral hip replacements, with minimal acute and late toxicities.
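
    The abstract reports a mean homogeneity index (HI) of 1.07 without defining the index; a common near-max/near-min ratio such as D5%/D95% is assumed in the sketch below, applied to a synthetic PTV dose distribution, purely to illustrate how such a figure is computed.

```python
# Homogeneity index from a (synthetic) per-voxel PTV dose array,
# assuming the D5%/D95% definition. Not the study's data.
import numpy as np

rng = np.random.default_rng(3)
ptv_dose_gy = rng.normal(loc=72.0, scale=1.5, size=5000)  # hypothetical voxel doses

d5 = np.percentile(ptv_dose_gy, 95)  # dose received by the hottest 5% of the volume
d95 = np.percentile(ptv_dose_gy, 5)  # dose covering 95% of the volume
print(f"HI = D5/D95 = {d5 / d95:.3f}")
```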

    Robust electromigration reliability through engineering optimization

    No full text
    With complex process integration approaches and severe fabrication limitations caused by the introduction of new materials and diminishing process margins, there are mounting concerns over the increased failure rate in the early life cycle of product application (e.g. <1 year of operation), known as infant mortality failures. A paradigm change in reliability qualification methodology, aimed at understanding the impact of variation on reliability, is required to ensure reliability robustness. Using electromigration (EM) as an example, this paper describes a methodology for studying the impact of process variation on reliability. A model that predicts the impact of process variation on EM sigma is also proposed, enabling variation and its impact on reliability to be quantified. Using this methodology, the critical process parameters impacting reliability can be identified and controlled to ensure reliability robustness.
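
    As a hedged illustration of how process variation feeds into EM lifetime, the sketch below uses Black's equation, t50 = A · J^(-n) · exp(Ea/(kT)), the standard EM lifetime model. The parameter values are textbook-style placeholders, not values from the paper, and only relative current densities are used.

```python
# Black's equation for EM median time-to-failure; shows how a rise in
# current density J (e.g. from a narrower line) scales t50 as J**-n.
import math

def black_t50(j_rel, temp_k, a=1.0, n=2.0, ea_ev=0.9):
    # j_rel: current density relative to nominal; a, n, ea_ev: assumed
    # model parameters (placeholders). Only ratios of t50 are meaningful.
    k_ev = 8.617e-5  # Boltzmann constant, eV/K
    return a * j_rel ** (-n) * math.exp(ea_ev / (k_ev * temp_k))

for j in (1.0, 1.2, 1.5):  # narrower line at fixed current -> higher J
    ratio = black_t50(j, 398.0) / black_t50(1.0, 398.0)
    print(f"J x{j:.1f}: t50 scales by {ratio:.2f}")
```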

    An assessment of the magnitude of intra-fraction movement of head-and-neck IMRT cases and its implication on the action-level of the imaging protocol.

    No full text
    BACKGROUND AND PURPOSE: A planning margin ≤3 mm is employed in some head-and-neck IMRT cases due to the proximity of critical structures. This study aims to explore the need to redefine the action level in the head-and-neck imaging protocol in consideration of intra-fraction movement.
    MATERIAL AND METHODS: This is a local study of 18 patients treated using the same immobilisation system and setup protocol. An orthogonal pair of post-treatment kilovoltage X-ray images was acquired on the first three days of treatment. 106 sets of pre- and post-treatment kV X-ray images acquired over 53 fractions were analysed against the treatment planning DRR to calculate intra-fraction movement.
    RESULTS: Individual mean intra-fraction movement in all directions ranged from -1.8 to 1.1 mm. Population mean (median) intra-fraction movement in the x-, y- and z-planes was -0.1 mm (0 mm), -0.3 mm (-0.3 mm) and -0.2 mm (-0.2 mm), respectively. Intra-fraction movement in all three dimensions was statistically significant (p<0.05). 7 out of 53 fractions (13.2%) were flagged because the combined magnitude of the intra-fraction motion and the uncorrected pre-treatment setup errors exceeded the given margins.
    CONCLUSIONS: A 3 mm action level (AL) was not adequate to account for intra-fraction movement when the CTV-PTV margin was ≤3 mm; it should be excluded from the routine imaging protocol, and daily image-guided radiotherapy should be employed instead. Adjusting the action level to 2 mm would allow a more confident approach to delivering the prescribed dose in head-and-neck IMRT cases.
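
    The margin check in the Results can be illustrated in a few lines: combine the uncorrected setup error with the measured intra-fraction shift per axis, then flag fractions whose 3D vector magnitude exceeds the CTV-PTV margin. The shift values below are invented for illustration.

```python
# Flag fractions where combined setup error + intra-fraction motion
# exceeds the planning margin. Hypothetical per-axis shifts in mm.
import numpy as np

margin_mm = 3.0
setup_error_mm = np.array([[0.5, -1.0, 0.8],
                           [1.2,  0.3, -0.4]])   # x, y, z per fraction
intrafraction_mm = np.array([[0.9, -0.6, 1.1],
                             [-0.2, 0.4, 0.1]])

combined = setup_error_mm + intrafraction_mm
magnitude = np.linalg.norm(combined, axis=1)     # 3D vector length per fraction
for i, m in enumerate(magnitude, start=1):
    flag = "EXCEEDS margin" if m > margin_mm else "within margin"
    print(f"fraction {i}: |shift| = {m:.1f} mm -> {flag}")
```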