    A Study on the Effect of Design Factors of Slim Keyboard's Tactile Feedback

    With the rapid development of computer technology, computer and keyboard design is trending towards slimness, and this change in mobile input devices directly influences user behavior. Although multi-touch applications allow text entry through a virtual keyboard, the performance, feedback, and comfort of that technology are inferior to a traditional keyboard, and while manufacturers have launched mobile touch keyboards and projection keyboards, their performance has not been satisfying. Therefore, this study examined the design factors of slim pressure-sensitive keyboards. The factors were evaluated with objective measures (accuracy and speed) and subjective measures (operability, recognition, feedback, and difficulty) across key shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and actuation force (35±10 g, 60±10 g, and 85±10 g). MANOVA and Taguchi methods (using signal-to-noise ratios) were applied to find the optimal level of each design factor. Participants were divided into two groups by typing speed (30 words/minute). Given the number of variables and levels, the experiments used a fractional factorial design, and a representative model of the research samples was built for input task testing. The findings showed that participants with low typing speed relied primarily on vision to recognize the keys, whereas those with high typing speed relied on tactile feedback, which was affected by key thickness and force. Across the objective and subjective evaluations, the combination of L-shaped keys, 3 mm thickness, and 60±10 g force was identified as the optimal combination for performance and satisfaction. The learning curve was also analyzed against a traditional standard keyboard to investigate the influence of user experience on keyboard operation; the results indicated that the optimal combination still provided input performance inferior to a standard keyboard. The results can serve as a reference for developing related products in industry and for applying them to touch devices and input interfaces that interact with people.
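
    The abstract does not spell out the Taguchi formulation used; as a hedged illustration, the sketch below computes the standard "larger-is-better" signal-to-noise ratio that such an analysis typically applies to responses like accuracy or speed (the trial values are hypothetical, not the study's data).

        import numpy as np

        def sn_larger_is_better(y):
            """Taguchi signal-to-noise ratio (dB) for a larger-is-better
            response such as typing speed or accuracy."""
            y = np.asarray(y, dtype=float)
            return -10.0 * np.log10(np.mean(1.0 / y**2))

        # Hypothetical accuracy scores (%) for one factor-level combination
        # of the fractional factorial design; values are illustrative only.
        trial_accuracy = [92.0, 95.5, 90.0, 94.0]
        print(f"S/N (larger-is-better): {sn_larger_is_better(trial_accuracy):.2f} dB")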

    On model selection from a finite family of possibly misspecified time series models

    Consider finite parametric time series models. "I have n observations and k models; which model should I choose on the basis of the data alone?" is a frequently asked question in many practical situations. This poses the key problem of selecting a model from a collection of candidate models, none of which is necessarily the true data-generating process (DGP). Although the existing literature on model selection is vast, there is a serious lacuna in that the above problem does not seem to have received much attention. In fact, existing model selection criteria have avoided addressing the problem directly, either by assuming that the true DGP is included among the candidate models and aiming at choosing this DGP, or by assuming that the true DGP can be asymptotically approximated by an increasing sequence of candidate models and aiming at choosing the candidate with the best predictive capability in some asymptotic sense. In this article, we propose a misspecification-resistant information criterion (MRIC) to address the key problem directly. We first prove the asymptotic efficiency of MRIC, whether or not the true DGP is among the candidates, within the fixed-dimensional framework. We then extend this result to the high-dimensional case in which the number of candidate variables is much larger than the sample size. In particular, we show that MRIC can be used in conjunction with a high-dimensional model selection method to select the (asymptotically) best predictive model across several high-dimensional misspecified time series models.
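
    The MRIC formula itself is not given in the abstract; purely as an illustration of the underlying problem, the sketch below selects among several AR candidates, none of which matches the (simulated ARMA-type) data-generating process, by comparing one-step-ahead out-of-sample prediction error. The data and candidate orders are assumptions for illustration, not the article's method.

        import numpy as np

        rng = np.random.default_rng(0)

        # Simulated ARMA(1,1)-type series: none of the AR candidates below is
        # the true data-generating process, mimicking the misspecified setting.
        n = 600
        e = rng.standard_normal(n)
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = 0.6 * y[t - 1] + e[t] + 0.4 * e[t - 1]

        def fit_ar(series, p):
            """Least-squares fit of an AR(p) model; returns lag coefficients."""
            Y = series[p:]
            X = np.column_stack([series[p - k:-k] for k in range(1, p + 1)])
            coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
            return coef

        def one_step_mse(train, test, p):
            """Out-of-sample one-step-ahead mean squared prediction error."""
            coef = fit_ar(train, p)
            hist = np.concatenate([train[-p:], test])
            preds = [hist[i:i + p][::-1] @ coef for i in range(len(test))]
            return np.mean((test - np.array(preds)) ** 2)

        split = 450
        candidates = [1, 2, 3, 5]                      # all misspecified AR orders
        scores = {p: one_step_mse(y[:split], y[split:], p) for p in candidates}
        best = min(scores, key=scores.get)
        print("out-of-sample MSE by order:", scores)
        print(f"best predictive candidate: AR({best})")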

    Study of Sodium Ion Selective Electrodes and Differential Structures with Anodized Indium Tin Oxide

    The objective of this work is the study and characterization of anodized indium tin oxide (anodized-ITO) as a sodium ion selective electrode, together with a differential structure consisting of a sodium-selective-membrane/anodized-ITO as sensor 1, an anodized-ITO membrane as the contrast sensor 2, and an ITO as the reference electrode. The anodized-ITO was fabricated by anodic oxidation at room temperature, a low-cost and simple manufacturing process that makes it easy to control the variation in film resistance. The EGFET structure based on anodized-ITO shows good linear pH sensitivity, approximately 54.44 mV/pH from pH 2 to pH 12. The proposed sodium electrodes, prepared with PVC-COOH, DOS embedding colloid, the complex Na-TFBD, and the ionophore B12C4, show good sensitivity of 52.48 mV/decade from 10⁻⁴ M to 1 M and 29.96 mV/decade from 10⁻⁷ M to 10⁻⁴ M. The sodium sensitivity of the differential sodium-sensing device is 58.65 mV/decade between 10⁻⁴ M and 1 M, with a corresponding linearity of 0.998, and 19.17 mV/decade between 10⁻⁵ M and 10⁻⁴ M.
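
    The sensitivities reported in mV/decade correspond to the slope of electrode potential versus the base-10 logarithm of Na⁺ concentration. A minimal sketch of that calibration fit, using hypothetical potential readings rather than the paper's measurements, might be:

        import numpy as np

        # Hypothetical calibration data: Na+ concentration (M) and measured
        # electrode potential (mV); illustrative values only.
        conc = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0])
        emf = np.array([120.0, 172.5, 225.0, 278.0, 330.5])

        # Sensitivity (mV/decade) is the slope of potential vs log10(concentration);
        # compare with the theoretical Nernst slope of ~59.2 mV/decade at 25 degC.
        slope, intercept = np.polyfit(np.log10(conc), emf, 1)
        print(f"sensitivity: {slope:.2f} mV/decade")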

    Robustness in foreign exchange rate forecasting models : economics-based modelling after the financial crisis

    The aim of this article is to analyse the out-of-sample behaviour of a set of statistical and economics-based models when forecasting exchange rates (FX) for the UK, Japan, and the Euro Zone against the US, with special focus on the commodity price boom of 2007-8 and the financial crisis of 2008-9. We analyse the forecasting behaviour of six economic and three statistical models at horizons from one up to 60 steps ahead, using a monthly dataset covering 1981.1 to 2014.6. We first analyse forecasting errors up to mid-2006 and then compare them with those obtained up to mid-2014. Our six economics-based models fall into three groups: interest rate spreads, monetary fundamentals, and PPP with global measures. Our results indicate that the first-best model does indeed change across the two spans. Interest rate models tend to predict better over the short sample and also track the series better when the crisis hit, while over the longer sample the models based on price differentials are more promising, albeit with heterogeneous results across countries. These results are important because they shed light on which model specification to use when facing different FX volatility.
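
    The abstract does not state the error metric; a common way to assess such multi-step-ahead FX forecasts is the root mean squared error relative to a driftless random-walk benchmark. The sketch below illustrates that comparison with a simulated series and a simple AR(1)-in-differences stand-in, neither of which is taken from the article:

        import numpy as np

        rng = np.random.default_rng(1)

        # Placeholder monthly log exchange-rate series (simulated, not actual data).
        n = 400
        fx = np.cumsum(0.01 * rng.standard_normal(n))

        def rw_forecast(series, origin, h):
            """Driftless random-walk forecast: the last observed value."""
            return series[origin - 1]

        def ar1_forecast(series, origin, h):
            """AR(1)-in-differences forecast iterated h steps ahead
            (a simple stand-in for the economics-based models)."""
            d = np.diff(series[:origin])
            phi = np.corrcoef(d[:-1], d[1:])[0, 1]
            level, step = series[origin - 1], d[-1]
            for _ in range(h):
                step = phi * step
                level += step
            return level

        def rmse(series, forecaster, h, start):
            """RMSE of h-step-ahead forecasts made from rolling origins."""
            errs = [series[t + h - 1] - forecaster(series, t, h)
                    for t in range(start, len(series) - h + 1)]
            return np.sqrt(np.mean(np.square(errs)))

        for h in (1, 12, 60):
            ratio = rmse(fx, ar1_forecast, h, 300) / rmse(fx, rw_forecast, h, 300)
            print(f"h={h:>2}: RMSE ratio vs random walk = {ratio:.3f}")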

    Assessing the diffusion of FinTech innovation in financial industry: using the rough MCDM model

    We develop a conceptual structure to explore how financial technology (FinTech) innovation is being implemented, dealing with the vague, inconsistent, and ambiguous knowledge found in real-world settings. The structure is built on the technology-organization-environment (TOE) context and uses multi-criteria estimation to measure the significance of FinTech innovation. We develop an integrated MCDM (multiple criteria decision-making) model based on rough set theory to help administrators obtain a strategic influence relation map for moving performance towards the aspiration level. The model involves three steps: first, we apply rough numbers to aggregate group opinions reflecting experts' real experiences; second, we use the rough DEMATEL-based ANP (RDANP) to obtain the rough influential weights and the rough influential network relationship map (RINRM) for the TOE structure and its corresponding attributes; finally, we use the rough modified VIKOR with those influential weights to analyse the gap between the performance values and the aspiration level. The empirical case comes from the financial industry in Taiwan. According to the weighting results, expected benefits, technology integration, and competitive pressure were the most important criteria. Our results also illustrate how FinTech innovation can be used to promote financial services.
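
    The first step converts crisp expert ratings into rough numbers; a minimal sketch of the usual rough-number interval (lower limit = mean of ratings not above the value, upper limit = mean of ratings not below it), using hypothetical ratings rather than the study's survey data, could look like:

        import numpy as np

        def rough_interval(ratings, value):
            """Rough lower/upper limits of one rating within a group of ratings."""
            r = np.asarray(ratings, dtype=float)
            lower = r[r <= value].mean()   # mean of ratings not above `value`
            upper = r[r >= value].mean()   # mean of ratings not below `value`
            return lower, upper

        def group_rough_number(ratings):
            """Average rough interval over all experts' ratings for one criterion."""
            intervals = np.array([rough_interval(ratings, v) for v in ratings])
            return intervals.mean(axis=0)

        # Hypothetical expert ratings (0-10 scale) for one TOE criterion,
        # e.g. "expected benefits"; values are illustrative only.
        ratings = [7, 8, 6, 9, 7]
        lo, hi = group_rough_number(ratings)
        print(f"group rough number: [{lo:.2f}, {hi:.2f}]")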

    Clinical Study Underestimated Rate of Status Epilepticus according to the Traditional Definition of Status Epilepticus

    Purpose. Status epilepticus (SE) is an important neurological emergency, and early diagnosis could improve outcomes. Traditionally, SE is defined as seizures lasting at least 30 min or repeated seizures over 30 min without recovery of consciousness. Some specialists have argued that the seizure duration qualifying as SE should be shorter, and an operational definition of SE has been suggested; it is unclear whether physicians follow it. The objective of this study was to investigate whether the incidence of SE is underestimated and to quantify the underestimation rate. Methods. This retrospective study evaluates the difference in the diagnosis of SE between the operational and traditional definitions. Between July 1, 2012, and June 30, 2014, patients discharged with ICD-9 codes for epilepsy (345.X) from Chia-Yi Christian Hospital were included. A seizure lasting at least 30 min, or repeated seizures over 30 min without recovery of consciousness, was considered SE under the traditional definition (TDSE). A seizure lasting between 5 and 30 min was considered SE under the operational definition (ODSE) and was defined as underestimated status epilepticus (UESE). Results. Over the 2-year period there were 256 seizure episodes requiring hospital admission. Of these, 99 episodes lasted longer than 5 min; 61 (61.6%) persisted over 30 min (TDSE) and 38 (38.4%) lasted between 5 and 30 min (UESE). Of the 38 episodes lasting 5 to 30 minutes, only one had previously been discharged as SE (ICD-9-CM 345.3). Conclusion. We underestimated 37.4% of SE. Continuing education regarding the diagnosis and treatment of epilepsy is important for physicians.
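
    The two definitions reduce to a simple duration rule; purely as an illustration (the durations below are hypothetical, not the study's records), classifying episodes and computing an underestimation rate might look like:

        # Hypothetical seizure durations in minutes (illustrative only).
        durations = [2, 7, 12, 45, 31, 6, 90, 25, 3, 40]

        tdse = [d for d in durations if d >= 30]       # traditional definition (>= 30 min)
        uese = [d for d in durations if 5 <= d < 30]   # operational-only: underestimated SE
        eligible = tdse + uese                         # episodes lasting more than 5 min

        rate = len(uese) / len(eligible)
        print(f"TDSE: {len(tdse)}, UESE: {len(uese)}, underestimation rate: {rate:.1%}")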