
    Towards Flexible Time-to-event Modeling: Optimizing Neural Networks via Rank Regression

    Time-to-event analysis, also known as survival analysis, aims to predict the time of occurrence of an event given a set of features. One of the major challenges in this area is dealing with censored data, which can make learning algorithms more complex. Traditional methods such as Cox's proportional hazards model and the accelerated failure time (AFT) model have been popular in this field, but they often require assumptions such as proportional hazards and linearity. In particular, AFT models often require pre-specified parametric distributional assumptions. To improve predictive performance and relax strict assumptions, many deep learning approaches for hazard-based models have been proposed in recent years. However, representation learning for AFT has not been widely explored in the neural network literature, despite its simplicity and interpretability compared to hazard-focused methods. In this work, we introduce the Deep AFT Rank-regression model for Time-to-event prediction (DART). This model uses an objective function based on Gehan's rank statistic, which is efficient and reliable for representation learning. On top of eliminating the requirement to establish a baseline event time distribution, DART retains the advantage of directly predicting event times, as in standard AFT models. The proposed method is a semiparametric approach to AFT modeling that imposes no distributional assumptions on the survival time distribution. It also avoids the additional hyperparameters and complex model architectures of existing neural network-based AFT models. Through quantitative analysis on various benchmark datasets, we show that DART has significant potential for modeling high-throughput censored time-to-event data. Comment: Accepted at ECAI 202
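The Gehan-type objective behind DART can be illustrated as a pairwise rank loss on AFT residuals. The sketch below is a minimal NumPy version of the convex Gehan loss (the exact weighting and sign conventions in DART may differ); `log_time`, `pred`, and `event` are hypothetical names for the observed log times, network outputs, and censoring indicators.

```python
import numpy as np

def gehan_loss(log_time, pred, event):
    """Pairwise Gehan-type rank loss for a semiparametric AFT model.

    log_time : observed log times (event or censoring), shape (n,)
    pred     : model predictions f(x_i) of the log event time, shape (n,)
    event    : 1 if the event was observed, 0 if censored, shape (n,)
    """
    e = log_time - pred                  # AFT residuals e_i
    # pair (i, j) is penalized when subject i's event was observed
    # but its residual falls below subject j's: max(e_j - e_i, 0)
    diff = e[None, :] - e[:, None]       # diff[i, j] = e_j - e_i
    return float(np.mean(event[:, None] * np.maximum(diff, 0.0)))
```

Because the loss depends only on pairwise differences of residuals, no baseline event-time distribution is needed, which is the property the abstract highlights.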

    The Dancer Depicted, the Dancer Who Has Herself Depicted: Representations of Korean Culture in Choi Seung-hee (1911-1969)

    Degree type: doctoral degree by coursework. Dissertation committee: (chair) Professor Katsuya Sugawara (University of Tokyo); Professor Eiko Imahashi (University of Tokyo); Associate Professor Shiho Maejima (University of Tokyo); Professor Tatsuhiko Tsukiashi (University of Tokyo); Professor Kwi-Ryun Chung (Utsunomiya University). University of Tokyo (東京大学)

    Synthesis of Silicon Nanoparticles and Nanowires by a Nontransferred Arc Plasma System

    Silicon nanomaterials were synthesized from microsized solid silicon powder using a nontransferred arc plasma system. The synthesized nanomaterials were either spherical or wire-like in morphology, depending on the arc plasma input power, the flow rate of the plasma-forming gas, and the collecting position of the product. The product consisted of spherical nanoparticles at high input power, where evaporation was complete, and of nanowires at relatively low input power. The mean diameter of the silicon nanoparticles increased from 20.52 nm to 40.01 nm as the input power was raised from 9 kW to 13 kW. The diameter of the silicon nanowires, on the other hand, was controlled by the flow rate of the plasma-forming gas: their mean diameter increased from 16.69 nm to 23.03 nm as the flow rate was decreased from 15 L/min to 12 L/min.

    Probabilistic Integrated Urban Inundation Modeling Using Sequential Data Assimilation

    Urban inundation caused by climate change and heavy rainfall is one of the most common natural disasters worldwide. Accurate urban inundation prediction nevertheless remains difficult because of uncertainties in the input forcing data, model parameters, and observations. Although numerous sophisticated data assimilation algorithms have been proposed to increase the certainty of predictions, few attempts have been made to combine data assimilation with integrated inundation models, owing to expensive computations and numerical instability, such as violations of the conservation and momentum equations during the update step. In this study, we propose a probabilistic integrated urban inundation modeling scheme using sequential data assimilation. The underlying integrated urban inundation model consists of a 2D inundation model of the ground surface and a 1D network model of the sewer pipes, coupled by a sub-model that exchanges storm water between the ground surface and the sewerage system. In our method, uncertainties in the modeling conditions are expressed explicitly by ensembles with different rainfall inputs, initial conditions, and model parameters. Particle filtering (PF), a sequential data assimilation technique for nonlinear and non-Gaussian models, is then applied to sequentially update the model states and parameters as new observations arrive from monitoring systems. Several synthetic experiments demonstrate the applicability of the proposed method in an urbanized area in Osaka, Japan. The discussion focuses on noise specification and updating methods in PF, and on comparing the accuracy of deterministic and probabilistic inundation modeling.
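The sequential update in PF can be illustrated with a minimal bootstrap particle filter in one dimension. This is a generic sketch, not the authors' coupled 1D/2D implementation: the scalar state (a single water depth) and the Gaussian observation-error model are illustrative assumptions.

```python
import numpy as np

def pf_update(particles, observation, obs_std, rng):
    """One bootstrap-particle-filter update for a scalar state.

    particles   : ensemble of model states (e.g. water depths), shape (n,)
    observation : newly arrived gauge measurement
    obs_std     : assumed std. dev. of the observation error
    """
    # weight each particle by the Gaussian likelihood of the observation
    w = np.exp(-0.5 * ((particles - observation) / obs_std) ** 2)
    w /= w.sum()
    # resample in proportion to the weights, then add small jitter so
    # the ensemble does not collapse onto a few identical particles
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx] + rng.normal(0.0, 0.01 * obs_std, len(particles))
```

After the update, the ensemble concentrates around the observation; repeating this at each observation time is the sequential assimilation loop the abstract describes.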

    Prediction model for mechanical properties of lightweight aggregate concrete using artificial neural network

    The mechanical properties of lightweight aggregate concrete (LWAC) depend on the mixing ratio of its binders, normal-weight aggregate (NWA), and lightweight aggregate (LWA). Extensive studies have characterized the relation between concrete components and the mechanical characteristics of LWAC, proposing empirical equations based on regression models fitted to experimental results. However, results obtained from laboratory experiments do not provide consistent prediction accuracy, because of the complicated relation between materials and mix proportions; a general prediction model that considers a range of mix proportions and concrete constituents is needed. This study adopts an artificial neural network (ANN) to model the complex, nonlinear relation between the constituents and the resulting compressive strength and elastic modulus of LWAC. To construct a database for the ANN model, extensive data were collected from the literature, covering various mix proportions, material properties, and mechanical characteristics of concrete. The optimal ANN architecture, in terms of the numbers of hidden layers and neurons, is determined to maximize prediction accuracy. Using this database and the optimal architecture, the performance of the ANN-based prediction model is evaluated for the compressive strength and elastic modulus of LWAC, and its accuracy is compared with that of previous ANN-based analyses, as well as with commonly used linear and nonlinear regression models.
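The kind of mapping the study fits can be sketched with a single-hidden-layer network trained by full-batch gradient descent on synthetic data. The features, target function, and network size below are hypothetical stand-ins, not the paper's database or its optimized architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical stand-ins for mix-proportion features:
# columns ~ [binder, water, NWA, LWA] ratios; target ~ normalized strength
X = rng.uniform(0.0, 1.0, size=(200, 4))
y = (0.5 * X[:, 0] - 0.2 * X[:, 1] + 0.3 * X[:, 2]).reshape(-1, 1)

# one hidden ReLU layer, trained with plain full-batch gradient descent
W1 = rng.normal(0.0, 0.5, (4, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)   # hidden activations
    return h, h @ W2 + b2

mse_start = float(np.mean((forward(X)[1] - y) ** 2))
for _ in range(2000):
    h, pred = forward(X)
    g = 2.0 * (pred - y) / len(X)      # d(MSE)/d(pred)
    gh = (g @ W2.T) * (h > 0)          # backprop through the ReLU
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)
mse_end = float(np.mean((forward(X)[1] - y) ** 2))
```

Varying the hidden-layer width and depth and comparing validation error is, in miniature, the architecture search the abstract refers to.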

    Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing

    Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides images of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of crack length information. The proposed system has been shown to measure cracks wider than 0.1 mm with a maximum length estimation error of 7.3%.
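The width estimate behind such a system reduces to converting crack pixels into physical units using the measured working distance. The sketch below is a simplified pinhole-camera version; the focal length and pixel pitch are hypothetical camera parameters, and the hybrid binarization is assumed to have already produced the binary mask.

```python
import numpy as np

def crack_width_mm(mask, working_distance_mm,
                   focal_length_mm=4.0, pixel_pitch_mm=0.0015):
    """Mean crack width from a binary mask and a working distance.

    mask : 2D array, 1 where the binarized image marks crack pixels.
    Pinhole model: one pixel spans
    pixel_pitch * working_distance / focal_length on the surface.
    """
    mm_per_px = pixel_pitch_mm * working_distance_mm / focal_length_mm
    widths_px = mask.sum(axis=1)        # crack pixels crossed per row
    return float(widths_px[widths_px > 0].mean() * mm_per_px)
```

This is why the ultrasonic distance measurement matters: without `working_distance_mm`, the pixel count cannot be converted into millimeters at all.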

    Parametric Study for Thermal and Catalytic Methane Pyrolysis for Hydrogen Production: Techno-Economic and Scenario Analysis

    As many countries attempt to build a hydrogen (H2) society and escape an energy paradigm based on fossil fuels, methane pyrolysis (MP) has received considerable attention for its ability to produce H2 without CO2 emissions. In this study, a techno-economic analysis, including process simulation, itemized cost estimation, and sensitivity and scenario analyses, was conducted for thermal-based and catalyst-based MP systems (TMP-S1 and CMP-S2) and for systems with additional H2 production via carbon (C) gasification and the water-gas shift (WGS) reaction (TMPG-S3 and CMPG-S4). Based on the technical performance, expressed by the H2 and C production rates, the fraction of H2 combusted to supply the required heat, and the ratio of reactants fed to the gasifier (C, air, and water (H2O)), unit H2 production costs of USD 2.14, 3.66, 3.53, and 3.82 per kg H2 were obtained for TMP-S1, CMP-S2, TMPG-S3, and CMPG-S4, respectively, at 40% H2 combusted and a C:air:H2O reactant ratio of 1:1:2. Sensitivity analysis yielded trends in the unit H2 production cost and identified the MP reactor, the reactant, and the C selling price as the key economic parameters. In particular, the scenario analysis of H2 production scale and C selling price showed economic competitiveness with commercialized H2 production methods.
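Unit-cost figures in such an analysis follow a simple levelized-cost identity: annualized costs, less by-product revenue, divided by annual H2 output. The function and numbers below are a hypothetical illustration of how a higher C selling price pulls the unit H2 cost down; they are not the paper's cost model or its inputs.

```python
def unit_h2_cost(annual_capex_usd, annual_opex_usd,
                 h2_kg_per_yr, c_kg_per_yr, c_price_usd_per_kg):
    """Levelized unit H2 cost (USD per kg H2), crediting carbon sales."""
    net_cost = (annual_capex_usd + annual_opex_usd
                - c_kg_per_yr * c_price_usd_per_kg)
    return net_cost / h2_kg_per_yr
```

With CH4 -> C + 2H2, roughly 3 kg of solid carbon accompany each kg of H2, so the carbon-credit term can dominate the sensitivity, which is consistent with the C selling price appearing among the key economic parameters.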

    Look-up the Rainbow: Efficient Table-based Parallel Implementation of Rainbow Signature on 64-bit ARMv8 Processors

    Rainbow signature is one of the finalists in the National Institute of Standards and Technology (NIST) post-quantum standardization process, and the only signature candidate designed on the multivariate quadratic hard problem. Rainbow is known to have a very small signature size compared to other post-quantum candidates. In this paper, we propose an efficient implementation technique that improves the performance of the Rainbow signature scheme. We propose a parallel polynomial multiplication for 64-bit ARMv8 processors in which a look-up table is created by pre-computing all 4x4-bit multiplication results. This technique was motivated by the observation that the existing implementation of Rainbow's polynomial multiplication relies on the Karatsuba algorithm, which is not optimal here because of its divide-and-conquer steps: operations on F16 are split into many small sub-field operations on F4 and F2. Further investigation reveals that when the polynomial multiplication in Rainbow operates on F16, its operands are 4-bit values. Since a 4x4-bit multiplication has only 256 possible input combinations, we construct a 256-byte look-up table; according to the 4-bit constant, only 16 bytes are loaded from the table at a time. The time-consuming multiplication is thus replaced by a table look-up. In addition, the implementation computes up to 16 results per register by exploiting the vector registers of 64-bit ARMv8 processors. With the proposed fast polynomial multiplication, we implemented optimized Rainbow III and V. These two parameter sets operate over F256 but use the sub-field F16 in the multiplication process, so the sub-field multiplication can be replaced with the proposed table look-up, eliminating a significant number of operations. Experiments on the Apple M1 processor show up to 167.2x and 51.6x performance improvements for the multiplier and the Rainbow signature, respectively, compared to the previous implementation.
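The table-based idea can be reproduced in a few lines: precompute all 256 products of two 4-bit field elements and replace multiplication with an index. The sketch below uses the reduction polynomial x^4 + x + 1 for F16, which may differ from the polynomial Rainbow actually uses, and it shows only the scalar look-up, without the ARMv8 vector-register parallelism.

```python
def gf16_mul(a, b):
    """Multiply two elements of F16 = F2[x] / (x^4 + x + 1)."""
    r = 0
    for i in range(4):                 # carry-less (XOR) multiplication
        if (b >> i) & 1:
            r ^= a << i
    for i in range(7, 3, -1):          # reduce degrees 7..4
        if (r >> i) & 1:
            r ^= 0b10011 << (i - 4)    # subtract x^4 + x + 1, shifted
    return r

# 256-byte table indexed by (a << 4) | b, as in the look-up approach
TABLE = bytes(gf16_mul(a, b) for a in range(16) for b in range(16))

def gf16_mul_lut(a, b):
    return TABLE[(a << 4) | b]
```

On the vectorized path, the 16 bytes of the table row selected by one 4-bit constant fit in a single vector register, which is what lets a byte-shuffle instruction compute 16 products at once.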

    Grover on SPEEDY

    With the advent of quantum computers, revisiting the security of cryptography has become an active research area in recent years. In this paper, we estimate the cost of applying Grover's algorithm to the SPEEDY block cipher. SPEEDY is a family of ultra-low-latency block ciphers presented at CHES'21. Key search with Grover's algorithm reduces the n-bit security of a block cipher to n/2 bits; the question is how many quantum resources Grover's algorithm requires to do so. NIST estimates the post-quantum security strength of symmetric-key cryptography by the cost of the Grover key search. SPEEDY provides 128-bit or 192-bit security depending on the number of rounds. Based on our estimated cost, we show that increasing the number of rounds is insufficient to ensure security against attacks by quantum computers. To the best of our knowledge, this is the first implementation of SPEEDY as a quantum circuit.
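The n-to-n/2 security reduction comes from Grover's quadratic speedup: searching a 2^n key space takes about (pi/4) * sqrt(2^n) oracle iterations. A back-of-the-envelope helper (generic Grover arithmetic, not the paper's circuit-level cost estimate):

```python
import math

def grover_iterations(key_bits):
    """Optimal number of Grover iterations for a 2^key_bits key space."""
    return math.floor(math.pi / 4 * math.sqrt(2 ** key_bits))

def effective_security_bits(key_bits):
    """log2 of the Grover iteration count, i.e. roughly key_bits / 2."""
    return math.log2(grover_iterations(key_bits))
```

For a 128-bit key this gives roughly 2^63.7 iterations, which is why the depth and gate count of each iteration (the cipher circuit itself), not just the iteration count, dominate the real cost estimate.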