A yet another brief introduction to neural networks
by 菊池 悠太 (Yuta Kikuchi)
Description: A substantially revised version of material I previously presented at my lab.
Transcript of A yet another brief introduction to neural networks
- 1. A yet another brief introduction to Neural Networks
- 2-3. Neural Network (diagram: input layer x, hidden layer z, output layer y)
- 4. Example: MNIST (28*28 images). Input x: 28*28 = 784 pixel values; output y: 10 scores, one per digit 0, 1, ..., 9, e.g. [0.05, 0.05, 0.05, 0.40, 0.05, 0.05, 0.15, 0.05, 0.15, 0.05]
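The shapes on this slide, sketched in numpy (the weights below are random placeholders, not a trained model):

```python
import numpy as np

# A 28*28 MNIST-style image, flattened into a 784-dimensional input x.
image = np.random.rand(28, 28)
x = image.reshape(784)

# The network maps 784 inputs to 10 scores, one per digit 0..9.
W = np.random.randn(10, 784) * 0.01   # placeholder weights
b = np.zeros(10)
scores = W @ x + b
print(scores.shape)  # (10,)
```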
- 5. Neuron (diagram: hidden unit j receives the input x through row j of W_1)
- 6. Neuron (diagram: hidden unit j feeds output unit k through row k of W_2)
- 7. (diagram: W_1 connects x to z, W_2 connects z to y)
- 8-9. Neural Network - Forward propagation - Back propagation - Parameter update
- 10. Forward Propagation (diagram: the input x flows through the hidden layer z to the output y)
- 11-14. Hidden layer: z = f(W_1 x + b_1). The nonlinearity f (tanh, sigmoid, ReLU, maxout, ...) is applied element-wise: f((x_0, x_1, x_2, x_3)) = (f(x_0), f(x_1), f(x_2), f(x_3))
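A small numpy sketch of the element-wise activations named on these slides (maxout is omitted, since it needs extra per-unit structure):

```python
import numpy as np

def sigmoid(a):
    # Squashes each element into (0, 1).
    return 1.0 / (1.0 + np.exp(-a))

def relu(a):
    # Zeroes out negative elements.
    return np.maximum(0.0, a)

# np.tanh is already element-wise. Each f maps a vector to a vector of
# the same shape, applying f to every component independently.
a = np.array([-2.0, -0.5, 0.0, 1.5])
print(np.tanh(a), sigmoid(a), relu(a))
```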
- 15-16. Output layer: y = σ(W_2 z + b_2), where σ is the output activation
- 17. The output activation σ is chosen to match the task (four common choices follow): y = σ(W_2 z + b_2)
- 18. Identity: σ(a) = a
- 19. Sigmoid, output in [0, 1] (0.0~1.0): σ(a) = 1 / (1 + exp(−a))
- 20. Softmax, outputs summing to 1, e.g. sum(0.2, 0.7, 0.1) = 1.0: σ(a)_k = exp(a_k) / Σ_j exp(a_j)
- 21. Element-wise sigmoid, each output independently in [0, 1]: σ(a) = 1 / (1 + exp(−a))
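A sketch of the two probabilistic output activations; the input vector is chosen so that the softmax output is roughly the slide's example (0.2, 0.7, 0.1):

```python
import numpy as np

def softmax(a):
    # Shift by the max for numerical stability; outputs sum to 1.
    e = np.exp(a - np.max(a))
    return e / e.sum()

def sigmoid(a):
    # Element-wise: each output lands in (0, 1) independently.
    return 1.0 / (1.0 + np.exp(-a))

a = np.array([1.0, 2.25, 0.3])
p = softmax(a)
print(p, p.sum())  # approx [0.2, 0.7, 0.1], summing to 1.0
```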
- 22. (diagram only)
- 23. Forward Propagation, in full: z = f(W_1 x + b_1), y = σ(W_2 z + b_2)
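The whole forward pass as a numpy sketch, assuming tanh for f, softmax for σ, and an arbitrary hidden size of 100 (the slides fix only the 784 inputs and 10 outputs):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - np.max(a))
    return e / e.sum()

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.01, (100, 784)), np.zeros(100)
W2, b2 = rng.normal(0, 0.01, (10, 100)), np.zeros(10)

def forward(x):
    z = np.tanh(W1 @ x + b1)   # z = f(W_1 x + b_1), f = tanh
    y = softmax(W2 @ z + b2)   # y = σ(W_2 z + b_2), σ = softmax
    return z, y

x = rng.random(784)
z, y = forward(x)
print(y.sum())  # 1.0: a probability distribution over the 10 digits
```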
- 24. Neural Network - Forward propagation - Back propagation - Parameter update
- 25. Back Propagation (diagram: the NN output y is compared with the target t)
- 26. Back Propagation: the NN output comes from the forward pass: z = f(W_1 x + b_1), y = σ(W_2 z + b_2)
- 27. Back Propagation: loss functions, matched to the output activation:
    squared error: E(θ) = (1/2) ‖y(θ) − t‖²
    cross-entropy: E = −Σ_{k=1}^{K} t_k log y_k
    binary cross-entropy: E = −t log y − (1 − t) log(1 − y)
    multi-label cross-entropy: E = −Σ_{k=1}^{K} [t_k log y_k + (1 − t_k) log(1 − y_k)]
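The losses from this slide as numpy functions (the multi-label form reduces to the binary one for a single output):

```python
import numpy as np

def squared_error(y, t):
    # E = 1/2 * ||y - t||^2
    return 0.5 * np.sum((y - t) ** 2)

def cross_entropy(y, t, eps=1e-12):
    # E = -sum_k t_k log y_k, with a one-hot target t and softmax output y.
    return -np.sum(t * np.log(y + eps))

def binary_cross_entropy(y, t, eps=1e-12):
    # E = -sum_k [t_k log y_k + (1 - t_k) log(1 - y_k)], sigmoid outputs.
    return -np.sum(t * np.log(y + eps) + (1 - t) * np.log(1 - y + eps))

t = np.array([0.0, 1.0, 0.0])
y = np.array([0.2, 0.7, 0.1])
print(cross_entropy(y, t))  # -log 0.7, approx 0.357
```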
- 28. Back Propagation: output error δ_y = y − t (this simple form holds when the loss is matched to the output activation as on slide 27)
- 29. Back Propagation: hidden error δ_z = (W_2^T δ_y) ∘ f′(a_z), where a_z = W_1 x + b_1
- 30. Back Propagation: δ_y = y − t, then δ_z = (W_2^T δ_y) ∘ f′(a_z)
- 31-32. Back Propagation: gradients ∂E_n/∂W_2 = δ_y z^T and ∂E_n/∂W_1 = δ_z x^T
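Slides 28-32 as one backward-pass sketch, assuming f = tanh (so f′(a_z) = 1 − z²) and a matched output/loss pair, which is what makes δ_y = y − t:

```python
import numpy as np

def backward(x, z, y, t, W2):
    delta_y = y - t                       # slide 28
    f_prime = 1.0 - z ** 2                # tanh'(a_z), written via z = tanh(a_z)
    delta_z = (W2.T @ delta_y) * f_prime  # slide 29
    grad_W2 = np.outer(delta_y, z)        # dE_n/dW2 = delta_y z^T
    grad_W1 = np.outer(delta_z, x)        # dE_n/dW1 = delta_z x^T
    return grad_W1, grad_W2

# Tiny demo with made-up sizes, just to check the shapes.
rng = np.random.default_rng(0)
x, z = rng.random(4), np.tanh(rng.normal(size=3))
y, t = np.array([0.2, 0.7, 0.1]), np.array([0.0, 1.0, 0.0])
g1, g2 = backward(x, z, y, t, rng.normal(size=(3, 3)))
print(g1.shape, g2.shape)  # (3, 4) (3, 3)
```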
- 33. Neural Network - Forward propagation - Back propagation - Parameter update
- 34. Update parameters: W_1 ← W_1 − η ∂E_n/∂W_1, W_2 ← W_2 − η ∂E_n/∂W_2, with learning rate η
- 35. Update parameters: W_1 ← W_1 − η ∂E_n/∂W_1, W_2 ← W_2 − η ∂E_n/∂W_2 - Gradient Descent - Stochastic Gradient Descent - SGD with mini-batch
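The update rule as code; eta (η) is the learning rate, and 0.1 is an arbitrary placeholder value:

```python
eta = 0.1  # learning rate; the value is an arbitrary placeholder

def sgd_step(W1, W2, grad_W1, grad_W2):
    # In-place update of numpy weight matrices.
    W1 -= eta * grad_W1   # W1 <- W1 - eta * dE_n/dW1
    W2 -= eta * grad_W2   # W2 <- W2 - eta * dE_n/dW2
    return W1, W2

# The rule is the same for all three variants on the slide: Gradient
# Descent averages the gradient over the full training set, SGD uses
# one example at a time, mini-batch SGD averages over a small batch.
```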
- 36. Neural Network
- 37. Forward Propagation: z = f(W_1 x + b_1), y = σ(W_2 z + b_2)
- 38. Back Propagation: δ_y = y − t, δ_z = (W_2^T δ_y) ∘ f′(a_z); ∂E_n/∂W_2 = δ_y z^T, ∂E_n/∂W_1 = δ_z x^T
- 39. Update parameters: W_1 ← W_1 − η ∂E_n/∂W_1, W_2 ← W_2 − η ∂E_n/∂W_2
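The three summary slides stitched into one training step, under the same assumptions as the earlier sketches (tanh hidden layer, softmax + cross-entropy output, hidden size 100, eta = 0.1); the bias updates are added here although the slides write only W_1 and W_2:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.01, (100, 784)), np.zeros(100)
W2, b2 = rng.normal(0, 0.01, (10, 100)), np.zeros(10)
eta = 0.1

def softmax(a):
    e = np.exp(a - np.max(a))
    return e / e.sum()

def train_step(x, t):
    global W1, b1, W2, b2
    # Forward propagation
    z = np.tanh(W1 @ x + b1)
    y = softmax(W2 @ z + b2)
    # Back propagation
    delta_y = y - t
    delta_z = (W2.T @ delta_y) * (1.0 - z ** 2)
    # Parameter update (biases follow the same rule as the weights)
    W2 -= eta * np.outer(delta_y, z); b2 -= eta * delta_y
    W1 -= eta * np.outer(delta_z, x); b1 -= eta * delta_z
    return -np.log(y[np.argmax(t)])  # cross-entropy loss for this example

x, t = rng.random(784), np.eye(10)[3]  # a dummy input labeled as digit 3
print(train_step(x, t))
```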
- 40. Autoencoder: a Neural Network trained to 1. encode the input and 2. reconstruct it at the output (the target is the input itself)
- 41. An Autoencoder is the same Neural Network: z = f(W_1 x + b_1), y = σ(W_2 z + b_2)
- 42. Autoencoder output: y = σ(W_2 z + b_2) with the element-wise sigmoid σ(a) = 1 / (1 + exp(−a)), since each pixel value lies in [0, 1] (0.0: ..., 1.0: ...)
- 43. Autoencoder
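A minimal autoencoder sketch following slides 40-42: the same two-layer network, but the output is 784-dimensional, the output activation is the element-wise sigmoid, and the training target is the input itself (sizes and eta remain placeholder assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.01, (100, 784)), np.zeros(100)
W2, b2 = rng.normal(0, 0.01, (784, 100)), np.zeros(784)
eta = 0.1

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def autoencoder_step(x):
    global W1, b1, W2, b2
    z = np.tanh(W1 @ x + b1)   # encode
    y = sigmoid(W2 @ z + b2)   # decode; each element in (0, 1), like a pixel
    delta_y = y - x            # target t is x itself (matched sigmoid + cross-entropy)
    delta_z = (W2.T @ delta_y) * (1.0 - z ** 2)
    W2 -= eta * np.outer(delta_y, z); b2 -= eta * delta_y
    W1 -= eta * np.outer(delta_z, x); b1 -= eta * delta_z
    # Report the multi-label cross-entropy reconstruction loss.
    return -np.sum(x * np.log(y + 1e-12) + (1 - x) * np.log(1 - y + 1e-12))

print(autoencoder_step(rng.random(784)))
```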
- 44. Denoising Autoencoder: add noise to the input (x + noise), then train the network to denoise, i.e. reconstruct the clean x
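One way to realize "add noise" is masking noise, i.e. randomly zeroing pixels; the slide does not specify the noise type or rate, so the 30% below is an assumption. Training then targets the clean input:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.01, (100, 784)), np.zeros(100)
W2, b2 = rng.normal(0, 0.01, (784, 100)), np.zeros(784)
eta = 0.1

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def denoising_step(x, rate=0.3):
    global W1, b1, W2, b2
    x_noisy = x * (rng.random(x.shape) >= rate)  # add noise: drop ~30% of pixels
    z = np.tanh(W1 @ x_noisy + b1)               # encode the corrupted input
    y = sigmoid(W2 @ z + b2)                     # decode
    delta_y = y - x                              # ...but target the clean x
    delta_z = (W2.T @ delta_y) * (1.0 - z ** 2)
    W2 -= eta * np.outer(delta_y, z); b2 -= eta * delta_y
    W1 -= eta * np.outer(delta_z, x_noisy); b1 -= eta * delta_z

denoising_step(rng.random(784))
```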