
    Harvesting Green Energy from Blue Ocean in Taiwan: Patent Mapping and Regulation Analyzing

    Taiwan is an island with abundant oceanic resources, yet it has thus far made little significant use of ocean power. The Taiwanese government has initiated several renewable energy policies to transform its energy supply structure from brown (fossil fuel-based) sources of energy to green (renewable-based) energy. In addition, at the 4th National Energy Conference held in 2015, ocean energy was identified as a key contributor among renewable energy sources. The Taiwanese government therefore proposed constructing a MW-scale demonstration electricity plant, powered by ocean energy, as promptly as possible. Compared with solar PV, wind, and biomass (waste) energy, the development of ocean energy in Taiwan has lagged behind. The aim of this chapter is thus to boost ocean energy adoption through analysis from technical and legal perspectives. This chapter first illustrates the ocean energy potential and development blueprint in Taiwan. Next, through patent research using the Taiwan Patent Search System, it identifies advantageous ocean power technologies innovated by Taiwanese companies, primarily wave and current technologies. Finally, through an examination of regulations and competent authorities, it discusses the possible challenges for implementing ocean energy technologies in Taiwan.

    Efficient Neural Network Robustness Certification with General Activation Functions

    Finding the minimum distortion of adversarial examples, and thus certifying robustness of neural network classifiers for given data points, is known to be a challenging problem. Nevertheless, it has recently been shown possible to give a non-trivial certified lower bound on the minimum adversarial distortion, and some recent progress has been made in this direction by exploiting the piece-wise linear nature of ReLU activations. However, generic robustness certification for general activation functions remains largely unexplored. To address this issue, in this paper we introduce CROWN, a general framework to certify robustness of neural networks with general activation functions for given input data points. The novelty of our algorithm lies in bounding a given activation function with linear and quadratic functions, allowing it to tackle general activation functions including but not limited to four popular choices: ReLU, tanh, sigmoid and arctan. In addition, we facilitate the search for a tighter certified lower bound by adaptively selecting appropriate surrogates for each neuron activation. Experimental results show that CROWN on ReLU networks can notably improve the certified lower bounds compared to the current state-of-the-art algorithm Fast-Lin, while having comparable computational efficiency. Furthermore, CROWN also demonstrates its effectiveness and flexibility on networks with general activation functions, including tanh, sigmoid and arctan. Comment: Accepted by NIPS 2018. Huan Zhang and Tsui-Wei Weng contributed equally.
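    The key idea in the abstract above, bounding a nonlinear activation between two linear functions over a pre-activation interval, can be sketched for a single tanh neuron. This is only an illustrative sketch of the linear-bounding idea, restricted to a concave segment of tanh (pre-activation interval with 0 <= l < u); the function names and the choice of tangent point are assumptions, and the CROWN paper itself handles all sign cases with adaptively chosen bounds.

    ```python
    import numpy as np

    def tanh_linear_bounds(l, u):
        """Linear lower/upper bounds a*x + b for tanh on [l, u],
        assuming 0 <= l < u so tanh is concave on the interval.
        (Illustrative sketch only; not the paper's full algorithm.)"""
        assert 0 <= l < u
        # Lower bound: the chord (secant) from (l, tanh(l)) to (u, tanh(u)),
        # which lies below a concave function on the interval.
        a_low = (np.tanh(u) - np.tanh(l)) / (u - l)
        b_low = np.tanh(l) - a_low * l
        # Upper bound: the tangent line at the midpoint d = (l + u) / 2,
        # which lies above a concave function everywhere.
        d = 0.5 * (l + u)
        a_up = 1.0 - np.tanh(d) ** 2      # derivative of tanh at d
        b_up = np.tanh(d) - a_up * d
        return (a_low, b_low), (a_up, b_up)

    # Sanity check: the two lines sandwich tanh on a sample grid.
    (al, bl), (au, bu) = tanh_linear_bounds(0.2, 1.5)
    xs = np.linspace(0.2, 1.5, 1000)
    assert np.all(al * xs + bl <= np.tanh(xs) + 1e-9)
    assert np.all(au * xs + bu >= np.tanh(xs) - 1e-9)
    ```

    Propagating such per-neuron linear bounds layer by layer is what yields a certified lower bound on the minimum adversarial distortion.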

    Building a 3.5 m prototype interferometer for the Q & A vacuum birefringence experiment and high precision ellipsometry

    We have built and tested a 3.5 m high-finesse Fabry-Perot prototype interferometer with a precision ellipsometer for the QED test and axion search (Q & A) experiment. We use X-pendulum and double-pendulum suspension designs and automatic control schemes developed by the gravitational-wave detection community. The Verdet constant and Cotton-Mouton constant of air were measured as a test. Double modulation, with polarization modulation at 100 Hz and magnetic-field modulation at 0.05 Hz, gives 10^{-7} rad phase noise for a 44-minute integration. Comment: This draft was presented at the 5th Edoardo Amaldi Conference on Gravitational Waves.

    Towards Fast Computation of Certified Robustness for ReLU Networks

    Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NP-complete problem [Katz, Barrett, Dill, Julian and Kochenderfer CAV17]. Although finding the exact minimum adversarial distortion is hard, giving a certified lower bound on the minimum distortion is possible. Currently available methods for computing such a bound are either time-consuming or deliver bounds of low quality that are too loose to be useful. In this paper, we exploit the special structure of ReLU networks and provide two computationally efficient algorithms, Fast-Lin and Fast-Lip, that are able to certify non-trivial lower bounds on minimum distortions: by bounding the ReLU units with appropriate linear functions (Fast-Lin), or by bounding the local Lipschitz constant (Fast-Lip). Experiments show that (1) our proposed methods deliver bounds close to (the gap is 2-3X) the exact minimum distortion found by Reluplex on small MNIST networks while our algorithms are more than 10,000 times faster; (2) our methods deliver similar quality of bounds (the gap is within 35% and usually around 10%; sometimes our bounds are even better) on larger networks compared to methods based on solving linear programming problems, while our algorithms are 33-14,000 times faster; (3) our method is capable of handling large MNIST and CIFAR networks up to 7 layers with more than 10,000 neurons within tens of seconds on a single CPU core. In addition, we show that, in fact, there is no polynomial-time algorithm that can approximately find the minimum \ell_1 adversarial distortion of a ReLU network with a 0.99\ln n approximation ratio unless \mathsf{NP}=\mathsf{P}, where n is the number of neurons in the network. Comment: Tsui-Wei Weng and Huan Zhang contributed equally.
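    The linear bounding of ReLU units mentioned in the abstract above can be sketched per neuron, given lower and upper pre-activation bounds l and u. This is a minimal sketch of the Fast-Lin-style relaxation described in the paper; the function name and return convention are assumptions.

    ```python
    def relu_linear_bounds(l, u):
        """Linear relaxation of ReLU on pre-activation interval [l, u].
        Returns ((slope, intercept) lower, (slope, intercept) upper)
        such that lower(x) <= ReLU(x) <= upper(x) for all x in [l, u]."""
        if l >= 0:
            # Neuron always active: ReLU(x) = x exactly.
            return (1.0, 0.0), (1.0, 0.0)
        if u <= 0:
            # Neuron always inactive: ReLU(x) = 0 exactly.
            return (0.0, 0.0), (0.0, 0.0)
        # Unstable neuron (l < 0 < u): upper bound is the chord through
        # (l, 0) and (u, u); Fast-Lin uses the same slope for the lower
        # bound, which keeps the layer-wise propagation cheap.
        s = u / (u - l)
        return (s, 0.0), (s, -s * l)   # lower: s*x,  upper: s*(x - l)
    ```

    For an unstable neuron the upper line passes through (l, 0) and (u, u), so it touches ReLU at both interval endpoints, while the lower line s*x stays below ReLU because its slope s is in (0, 1).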