410 research outputs found

    New and old social risks in Korean social policy: the case of the National Pension Scheme

    This is a study of old and new social risks in Korean social policy, examined through the National Pension Scheme (NPS). It provides a comprehensive overview of the Korean pension structure and the emergence of new social risk groups. Drawing on eleven years of the Korean Labour and Income Panel Study and using bivariate and multivariate analysis, this thesis examines the effectiveness of the NPS and its reforms in protecting new social risk groups. The analytical framework of the thesis is based on New Social Risk theory; its limitations in explaining developing welfare states such as Korea are also highlighted. Over the past two decades, the NPS has undergone dramatic financial cuts even as its coverage has expanded rapidly. Given Korea's aging population, reliance on such public schemes will further increase, which will have a profound impact, particularly on those with low incomes. Societal and economic changes in Korean society, resulting from de-industrialisation, have given rise to new social risk groups that differ from those that predominated in the post-war welfare era. These new groups are vulnerable because they cannot afford to contribute to their pensions even during their working lives, making it likely that they will receive little or no benefit from the NPS when they retire. They tend to be atypical contract holders and workers in small-scale enterprises without unions. Contrary to expectation, women with care responsibilities and young workers are found to be less vulnerable.

    Dynamic Self-Assembly and 3D Fluidic Trap in Rotating Fluids

    Department of Chemistry. This thesis describes a dynamical system enabling the formation of ordered structures and particle trapping in a non-inertial frame of reference. Lighter particles suspended in a denser rotating fluid filling a cylindrical tube can be localized on the axis of rotation by the axisymmetric centripetal force. Such particles can self-assemble into ordered tubular structures under mild rotational accelerations, and the type of the resulting packing depends on the particle concentration. We demonstrated various tubular structures with symmetries ranging from a simple helix to double and triple helices. Because a transient flow arises along the axis when the rotation rate is abruptly accelerated or decelerated, the structures can be interconverted by quickly changing the rotation rate. In addition, the chirality of the chiral structures can be selected by adjusting the system's orientation with respect to gravity. We also report unprecedented binary tubular structures obtained using two types of particles differing in density and/or size. Furthermore, when particles confined in the axisymmetric potential also experience confinement along the axis of rotation, they can be trapped and assemble into 3D ordered structures. We realized the axial confinement with disks fitted inside the tube that are made to rotate more slowly than the surrounding fluid by an external magnetic field (an eddy-current brake), giving rise to vortices. The strength of the axial confinement by the vortices is inversely proportional to the speed of the disks relative to the fluid. When two such disks are placed near the two ends of the tube, the particles can be trapped between them if the rotational rates of the tube and the disks are properly adjusted. Depending on the relative strengths of the radial and axial confinements, the trapped particles can exhibit orbiting trajectories, linear assemblies, or ordered packings. Similarly, particle clusters can have prolate, oblate, or spherical geometries. We showed various types of ordered packings using a few spherical particles and demonstrated various dynamic assemblies of cages and interlocked architectures with non-spherical particles, as well as a jammed colloidal monolith. As with the binary tubular structures, certain combinations of particles exhibit structural selectivity in polymorphic systems. Finally, the system is found to undergo interesting transition behaviors, such as a Hopf bifurcation and the coexistence of two limit cycles.
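    The on-axis localization described above can be summarized by a simple force balance in the rotating frame. As a sketch (the symbols below are assumptions, not the thesis's notation): for a particle of density ρ_p and volume V suspended in a fluid of density ρ_f rotating rigidly at angular velocity ω, the net radial force at distance r from the axis is

```latex
% Generalized buoyancy in a rotating frame: centrifugal force on the
% particle minus the centripetal pressure gradient of the displaced fluid.
F_r(r) = -\left(\rho_f - \rho_p\right) V \omega^2 r ,
\qquad
U(r) = \tfrac{1}{2}\left(\rho_f - \rho_p\right) V \omega^2 r^2 .
```

    For ρ_p < ρ_f the force points toward the axis, so lighter particles sit in an effective harmonic trap centered on the rotation axis, with stiffness growing as ω², consistent with the localization described above.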

    Process Safety Management: Optimized Models Influenced by Organization Culture

    Companies focus on Process Safety Management (PSM) in order to protect employees and facilities from accidents such as explosions and fires. Most elements of PSM are closely related to employees, who shape the organizational culture, and organizational culture directly affects safety culture. Companies strive for a strong safety culture, meaning the behaviors and responses exhibited in emergencies and abnormal situations. In this paper, the definition and essential theories of PSM are reviewed first, and safety culture in PSM and the safety culture of Indianapolis Power & Light (IPL) are discussed. IPL's method was to conduct a safety survey to evaluate its safety culture. To understand the safety culture at Rose-Hulman Institute of Technology (RHIT), two similar safety surveys were performed. The first survey gauged students' perceptions of Personal Protective Equipment (PPE) in the laboratory, and the second studied what types of methods companies use for safety training and identified current safety problems and solutions in the Chemical Engineering Unit Operations laboratory. Based on the results of the surveys, the safety culture of RHIT was analyzed and possible solutions were suggested.

    Regularization and Kernelization of the Maximin Correlation Approach

    Robust classification becomes challenging when each class consists of multiple subclasses. Examples include multi-font optical character recognition and automated protein function prediction. In correlation-based nearest-neighbor classification, the maximin correlation approach (MCA) provides the worst-case optimal solution by minimizing the maximum misclassification risk through an iterative procedure. Despite this optimality, the original MCA has drawbacks that have limited its wide applicability in practice: it tends to be sensitive to outliers, cannot effectively handle nonlinearities in datasets, and suffers from high computational complexity. To address these limitations, we propose an improved solution, named the regularized maximin correlation approach (R-MCA). We first reformulate MCA as a quadratically constrained linear programming (QCLP) problem, incorporate regularization by introducing slack variables in the primal problem of the QCLP, and derive the corresponding Lagrangian dual. The dual formulation enables us to apply the kernel trick to R-MCA so that it can better handle nonlinearities. Our experimental results demonstrate that the regularization and kernelization make the proposed R-MCA more robust and accurate for various classification tasks than the original MCA. Furthermore, when the data size or dimensionality grows, R-MCA runs substantially faster by solving either the primal or dual (whichever has a smaller variable dimension) of the QCLP.
    Comment: Submitted to IEEE Access
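    The correlation-based nearest-neighbor rule, and the maximin criterion that MCA optimizes, can be sketched as follows (a minimal illustration with made-up names; the actual MCA/R-MCA computes the maximin template by solving a QCLP iteratively, not by searching a finite candidate set):

```python
import numpy as np

def corr(a, b):
    # Pearson correlation between two vectors.
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def maximin_template(candidates, exemplars):
    # Maximin criterion: pick the candidate template whose worst-case
    # (minimum) correlation with the class exemplars is largest.
    scores = [min(corr(c, e) for e in exemplars) for c in candidates]
    return candidates[int(np.argmax(scores))]

def classify(x, templates):
    # Correlation-based nearest-neighbor rule: assign x to the class
    # whose template correlates with it most strongly.
    return max(templates, key=lambda k: corr(x, templates[k]))
```

    Here `maximin_template` applies the maximin criterion only over an explicit candidate list; that selection is exactly where the real method's iterative QCLP solver does the work.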

    After-Hours Block Trading, Short Sales, And Information Leakage: Evidence From Korea

    We investigate the impact of insider trading in the after-hours block market on stock price and short-sales volume, before and after the trading becomes public information. During the pre-announcement period, a positive (negative) abnormal stock return is generated when insiders buy (sell) their shares but not when quasi-insiders trade, implying that the stock price reflects long-lived private information about the corporate governance structure. The impact is most prominent when ownership shares are transferred to (from) corporate insiders. In contrast, short-sales volume generally does not depend on the identity of block holders. Short-sales volume is negatively correlated with the abnormal stock return only on the transaction date, indicating that tippees' short-sale decisions are based solely on their expectations of instantaneous stock returns. We also find evidence that insiders time their trades to maximize their realized profits or minimize their purchasing costs.
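    The abnormal returns underlying such an event study can be computed against a market benchmark; below is a sketch using the market-adjusted model (an assumption for illustration only, since the abstract does not specify the authors' exact benchmark):

```python
import numpy as np

def abnormal_returns(stock_ret, market_ret):
    # Market-adjusted model: AR_t = R_t - R_m,t
    return np.asarray(stock_ret) - np.asarray(market_ret)

def car(stock_ret, market_ret, start, end):
    # Cumulative abnormal return over the event window [start, end],
    # inclusive of both endpoints.
    ar = abnormal_returns(stock_ret, market_ret)
    return float(ar[start:end + 1].sum())
```

    A negative correlation between daily short-sales volume and these abnormal returns on the transaction date is the pattern the paper reports.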

    ๊นŠ์€ ์‹ ๊ฒฝ๋ง์„ ์ด์šฉํ•œ ๊ฐ•์ธํ•œ ํŠน์ง• ํ•™์Šต

    Thesis (Ph.D.) -- Graduate School of Seoul National University, Department of Electrical and Computer Engineering, August 2016. Advisor: Sungroh Yoon.
    Recent advances in machine learning continue to bring us closer to artificial intelligence. In particular, deep learning plays a key role in cutting-edge frameworks such as autonomous driving and game playing. Deep learning refers to a class of multi-layered neural networks, which is evolving rapidly as the amount of data increases, prior knowledge accumulates, efficient training schemes are developed, and high-end hardware is built. Currently, deep learning is the state-of-the-art technique for most recognition tasks. Because deep neural networks learn many parameters, there have been a variety of attempts to obtain good solutions over a wide search space. In this dissertation, three issues in deep learning are discussed, and regularization techniques to solve them are suggested. First, deep neural networks expose the problem of intrinsic blind spots called adversarial perturbations.
Thus, we construct neural networks that resist the directions of adversarial perturbations by introducing an explicit loss term that minimizes the differences between original and adversarial samples. Second, restricted Boltzmann machines show limited performance when handling minority samples in class-imbalanced datasets. Our approach addresses this limitation and is combined with a new regularization concept for datasets that have categorical features. Lastly, handling insufficient data must become more sophisticated when deep networks learn numerous parameters. Given high-dimensional samples, we augment datasets with adequate prior knowledge to estimate the underlying high-dimensional distribution. Furthermore, this dissertation presents the first application of deep belief networks to identifying junction splicing signals. Junction prediction is one of the major problems in the field of bioinformatics and a starting point for understanding the entire gene expression process. In summary, this dissertation proposes a set of deep learning regularization schemes that can learn the meaningful representations underlying large-scale genomic and image datasets.
The effectiveness of these methods was confirmed in a number of experimental studies.
    Contents: Chapter 1, Introduction (deep neural networks; issue 1: adversarial examples handling; issue 2: class-imbalance handling; issue 3: insufficient data handling; organization). Chapter 2, Background (basic operations for deep networks; history of deep networks; modern deep networks, including contrastive divergence and deep manifold learning). Chapter 3, Adversarial examples handling (manifold regularized networks; generation of adversarial examples; improved classification performance; disentanglement and generalization). Chapter 4, Class-imbalance handling (numerical interpretation of DNA sequences; review of the junction prediction problem; boosted contrastive divergence with categorical gradients; stacking and fine-tuning; improved prediction performance, runtime, and robustness; effects of regularization; efficient RBM training by boosted CD; identification of non-canonical splice sites). Chapter 5, Insufficient data handling (understanding comets; assessing DNA damage from tail shape; related image processing techniques; preprocessing, binarization, filtering, and overlap correction; characterization and classification; more accurate characterization by DeepComet). Chapter 6, Conclusion (dissertation summary; future work). Bibliography.
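    The first regularization idea above, an explicit loss term pulling a model's output on an adversarial copy toward its output on the original sample, can be sketched on a toy logistic model (all names are hypothetical; this illustrates the idea, not the dissertation's implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(w, x, y, eps):
    # FGSM-style perturbation: step each input in the sign of the
    # input-gradient of the cross-entropy loss.
    p = sigmoid(x @ w)
    grad_x = (p - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

def total_loss(w, x, y, eps=0.1, lam=1.0):
    # Cross-entropy plus a manifold-style penalty on the difference
    # between predictions for original and adversarial samples.
    x_adv = fgsm(w, x, y, eps)
    p, p_adv = sigmoid(x @ w), sigmoid(x_adv @ w)
    ce = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return ce + lam * np.mean((p - p_adv) ** 2)

def train_step(w, x, y, lr=0.1, eps=0.1, lam=1.0):
    # One gradient step on the combined loss; the perturbed inputs are
    # treated as constants (the usual stop-gradient through the attack).
    x_adv = fgsm(w, x, y, eps)
    p, p_adv = sigmoid(x @ w), sigmoid(x_adv @ w)
    n = len(y)
    g_ce = (p - y) @ x / n
    g_pen = (2 * (p - p_adv) * p * (1 - p)) @ x / n \
          - (2 * (p - p_adv) * p_adv * (1 - p_adv)) @ x_adv / n
    return w - lr * (g_ce + lam * g_pen)
```

    Repeated calls to `train_step` lower the combined loss, encouraging predictions that change little under small adversarial perturbations of the inputs.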

    Approximate backwards differentiation of gradient flow

    The gradient flow (GF) is an ODE whose explicit Euler discretization is the gradient descent method. In this work, we investigate a family of methods derived from approximate implicit discretizations of the GF, drawing a connection between larger stability regions and less sensitive hyperparameter tuning. We focus on the implicit τ-step backwards differentiation formulas (BDFs), approximated in an inner loop with a few iterations of vanilla gradient descent, and give their convergence rates when the objective function is convex, strongly convex, or nonconvex. Numerical experiments show the wide range of effects of these different methods on extremely poorly conditioned problems, especially those arising in training deep neural networks.
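    The τ = 1 member of this family is implicit Euler (BDF1), whose implicit update x_{k+1} = x_k − h∇f(x_{k+1}) can be solved approximately with a few inner iterations of vanilla gradient descent on the proximal objective f(z) + ‖z − x_k‖²/(2h). A minimal sketch (function and parameter names are assumptions, not the paper's code):

```python
import numpy as np

def approx_bdf1(grad_f, x0, h=0.5, outer=200, inner=5, inner_lr=0.01):
    # Approximate implicit-Euler (BDF1) discretization of the gradient
    # flow x' = -grad_f(x).  Each outer step approximately solves
    #     x_next = argmin_z  f(z) + ||z - x||^2 / (2h)
    # using a few vanilla gradient-descent iterations on that objective.
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        z = x.copy()
        for _ in range(inner):
            # Gradient of the proximal objective at z.
            z -= inner_lr * (grad_f(z) + (z - x) / h)
        x = z
    return x
```

    On an ill-conditioned quadratic such as f(x) = ½ xᵀ diag(100, 1) x, this scheme tolerates an outer step size h far larger than the roughly 2/L = 0.02 stability limit of explicit gradient descent, illustrating the larger stability regions the paper connects to less sensitive hyperparameter tuning.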