AI Notes [Part 8] :: Considerations in Designing an Artificial Neural Network!
Computer Science (CS)/AI 2020. 2. 29. 19:20

์ธ๊ณต์‹ ๊ฒฝ๋ง ์„ค๊ณ„ ์‹œ ๊ณ ๋ ค์‚ฌํ•ญ Network topology ๋„คํŠธ์›Œํฌ์˜ ๋ชจ์–‘ (feed forward, feed backward) Activation function ์ถœ๋ ฅ์˜ ํ˜•ํƒœ Objectives ๋ถ„๋ฅ˜? ํšŒ๊ท€? Loss function, Error๋กœ ๋‚˜ํƒ€๋‚ผ ์ˆ˜ ์žˆ์Œ Optimizers weight update Generalization Overfitting ๋ฐฉ์ง€ 2. activation function ์ถœ๋ ฅ์˜ ํ˜•ํƒœ ๊ฒฐ์ • 1. one-hot vector ์—ฌ๋Ÿฌ ๊ฐ’ ์ค‘ ํ•˜๋‚˜์˜ ๊ฐ’๋งŒ ์ถœ๋ ฅ ex_ ์ˆซ์ž ์‹๋ณ„ 2. softmax function ํ•ด๋‹น ์ถœ๋ ฅ์ด ๋‚˜์˜ฌ ํ™•๋ฅ ๋กœ ํ‘œํ˜„ 3. objective function ๊ธฐํƒ€ ๋ชฉ์ ํ•จ์ˆ˜ Mean absolute error / mae Mean absolute percentag..

AI Notes [Part 4] :: The Beginning of Deep Learning
Computer Science (CS)/AI 2020. 2. 1. 00:50

๋”ฅ๋Ÿฌ๋‹์˜ ์‹œ์ž‘ ์ด๋Ÿฌํ•œ multi-layer ์˜ forward-propagation ๊ณผ์ •์„ ์‹์œผ๋กœ ๋‚˜ํƒ€๋‚ด๋ณด๋ฉด, h1 = f(x11*w11+x12*w21) net = h1*w13+h2*w23 = f(x11*w11+x12*w21)*w13+f(x11*w12+x12*w22)*w23 ์—ฌ๊ธฐ์„œ f ์ฆ‰, activation fuction์ด linearํ•œ function์ด๋ผ๊ณ  ๊ฐ€์ •ํ•ด๋ณด์ž. ๊ทธ๋ ‡๋‹ค๋ฉด f(x) = ax์˜ ํ˜•ํƒœ์ด๋ฏ€๋กœ, net = x11*a(w11*w13+w12*w23)+x12*a(w21*w13+w22*w23) ์œผ๋กœ ๋‚˜ํƒ€๋‚ผ ์ˆ˜ ์žˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ด๋Ÿฌํ•œ net์€ ๊ฐ€์žฅ ์ฒ˜์Œ์— ์ฃผ์–ด์ง„ input layer์—๋‹ค๊ฐ€ ์ƒ์ˆ˜๋ฅผ ๊ณฑํ•œ ๊ผด์ด๋ฏ€๋กœ one-layer๋กœ ๋‚˜ํƒ€๋‚ผ ์ˆ˜ ์žˆ๋‹ค. ์ฆ‰, ์—ฌ๋Ÿฌ ๊ฐœ์˜ layer๋ฅผ ๊ฑฐ์ณค์Œ์—๋„ ์‰ฌ์šด ๋ฌธ์ œ๋กœ ..

AI Notes [Part 3] :: Learning (feat. adjusting the weights)
Computer Science (CS)/AI 2020. 1. 31. 16:24

weight์˜ ๋ณ€ํ™” 1. ๋žœ๋ค ์‹ค์Šต one-layer perceptron weight๋ฅผ ๋žœ๋ค์œผ๋กœ ํ•™์Šตํ•˜๋Š” ํผ์…‰ํŠธ๋ก  ๋งŒ๋“ค๊ธฐ input, weight ๊ฐฏ์ˆ˜๋Š” ์ž…๋ ฅ๋ฐ›๊ธฐ output์€ 1๊ฐœ๋กœ ๊ณ ์ • /* 2020-01-28 W.HE one-layer perceptron */ #include #include #include main() { /* variable set */ int input_num; float* input; float* w; float output = 0; float answer = 3; int try_num = 0; /* input input_num */ printf("enter number of inputs\n"); scanf_s("%d", &input_num); /* memory allocat..