Installing TensorFlow and Running Examples (Ubuntu)


TensorFlow, an open-source project released on November 9, 2015, is Google's second-generation system for deep learning.

For background on TensorFlow itself, see the previous post.

This post covers installing the system and running examples.



Installation Based on Python VirtualEnv


The environment of the desktop PC I use is as follows.


Installation environment

Ubuntu 14.04 64-bit (LTS)

VMware-based

Python 2.7 (see the installation guide)


I follow the VirtualEnv-based installation (see the VirtualEnv installation guide).

Python itself supports VirtualEnv, which gives you the benefit of an isolated execution environment. When a problem occurs, everyone is in the same situation, so published solutions can be applied as-is.

#Next, set up a new virtualenv environment. To set it up in the directory ~/tensorflow, run:
$ virtualenv --system-site-packages ~/tensorflow
$ cd ~/tensorflow

~/tensorflow now becomes the directory where the virtual environment is stored.

The --system-site-packages option grants the virtual environment access to the global site-packages.
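
One practical consequence (a hedged check, assuming a package such as numpy is installed globally): once the environment is activated (see below), a globally installed package imports without being reinstalled.

(tensorflow)$ python -c "import numpy; print numpy.__version__"  # resolved via the global site-packages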

The site-packages that apply only to this virtual environment are stored under the following path:

~/tensorflow/lib/python2.7/site-packages

The following path contains the Python versions available to this virtual environment:

~/tensorflow/lib/


Activate the virtualenv.

$ source (install path)/bin/activate  # If using bash
$ source ./tensorflow/bin/activate    # In my case

To deactivate, simply type deactivate.
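
For example, leaving the environment restores the normal prompt:

(tensorflow)$ deactivate
$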


If you use the csh shell, the command is:

$ source bin/activate.csh  # If using csh

Finally, you will see (tensorflow) prepended to the prompt, as shown below.

(tensorflow)root@jemin-virtual-machine:

Install TensorFlow inside the activated virtualenv environment.

There are two installation options, a CPU-only version and a GPU-enabled version; choose the one you want.

Since I have not installed the CUDA SDK, I installed the CPU-only version.

# For CPU-only version
$ pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl

# For GPU-enabled version (only install this version if you have the CUDA sdk installed)
$ pip install https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl

With that, TensorFlow is installed.
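
As a quick sanity check (a hedged one-liner that only verifies the package imports), you can run:

(tensorflow)$ python -c "import tensorflow"

If it exits silently, the wheel was installed correctly.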

Now let's run a simple example.


Hello, TensorFlow

Python 2.7.6 (default, Jun 22 2015, 17:58:13) 
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant("Hello, TensorFlow!")
>>> sess = tf.Session()
I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 8
I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 8
>>> print sess.run(hello)
Hello, TensorFlow!
>>> a = tf.constant(10)
>>> b = tf.constant(32)
>>> print sess.run(a+b)
42
(tensorflow)root@jemin-virtual-machine:~/tensorflow# 
If it runs as above, the installation is working. Note that tf.constant only adds a node to the graph; the actual computation happens when sess.run() is called.


The next example generates 100 three-dimensional data points and fits a plane to them.
import tensorflow as tf
import numpy as np

# Make 100 phony data points in NumPy.
x_data = np.float32(np.random.rand(2, 100))  # Random input
y_data = np.dot([0.100, 0.200], x_data) + 0.300

# Construct a linear model.
b = tf.Variable(tf.zeros([1]))
W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0))
y = tf.matmul(W, x_data) + b

# Minimize the squared errors.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

# For initializing the variables.
init = tf.initialize_all_variables()

# Launch the graph
sess = tf.Session()
sess.run(init)

# Fit the plane.
for step in xrange(0, 201):
    sess.run(train)
    if step % 20 == 0:
        print step, sess.run(W), sess.run(b)

# Learns best fit is W: [[0.100  0.200]], b: [0.300]
I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 8
I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 8
0 [[ 0.3138563   0.52770293]] [-0.0221094]
20 [[ 0.17014106  0.30057704]] [ 0.20599112]
40 [[ 0.12187628  0.23032692]] [ 0.27126268]
60 [[ 0.10676718  0.20918694]] [ 0.29121917]
80 [[ 0.10208294  0.2027912 ]] [ 0.29731771]
100 [[ 0.10063919  0.20084961]] [ 0.29918078]
120 [[ 0.10019579  0.20025893]] [ 0.29974979]
140 [[ 0.1000599   0.20007896]] [ 0.2999236]
160 [[ 0.10001831  0.2000241 ]] [ 0.29997668]
180 [[ 0.1000056   0.20000736]] [ 0.29999286]
200 [[ 0.1000017   0.20000222]] [ 0.29999784]

As the iterations proceed, you can see the parameters converging to the best fit of 0.1, 0.2, and 0.3.
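
As a hedged cross-check (not part of the original tutorial), the same plane can be recovered in closed form with NumPy's least-squares solver; since the phony data is noise-free, it should agree with what gradient descent converges to:

import numpy as np

x_data = np.float32(np.random.rand(2, 100))
y_data = np.dot([0.100, 0.200], x_data) + 0.300

# Append a row of ones so the bias b is fitted as a third coefficient.
X = np.vstack([x_data, np.ones((1, 100), dtype=np.float32)])  # shape (3, 100)
coeffs, _, _, _ = np.linalg.lstsq(X.T, y_data)  # solves X.T * w ~ y
print coeffs  # approximately [ 0.1  0.2  0.3]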



Running a Neural Network Model

Now let's run a deep learning example using TensorFlow.

The example we will run involves neural network modeling.


First, clone the TensorFlow source code from GitHub.

(tensorflow)root@jemin-virtual-machine: git clone https://github.com/tensorflow/tensorflow.git

Once the clone completes, let's run the example.

This example is described as a "LeNet-5-like convolutional MNIST model."

(tensorflow)$ cd tensorflow/models/image/mnist
(tensorflow)$ python convolutional.py
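
For a sense of what convolutional.py builds, here is a minimal sketch of the conv → ReLU → max-pool block that a LeNet-5-style model stacks (the filter sizes and channel counts here are illustrative assumptions, not the script's actual values):

import tensorflow as tf

# A batch of 64 grayscale 28x28 MNIST images (NHWC layout).
images = tf.placeholder(tf.float32, shape=[64, 28, 28, 1])

# 5x5 filters mapping 1 input channel to 32 feature maps.
conv_w = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
conv_b = tf.Variable(tf.zeros([32]))

conv = tf.nn.conv2d(images, conv_w, strides=[1, 1, 1, 1], padding='SAME')
relu = tf.nn.relu(tf.nn.bias_add(conv, conv_b))
pool = tf.nn.max_pool(relu, ksize=[1, 2, 2, 1],
                      strides=[1, 2, 2, 1], padding='SAME')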



Execution results

Initialized!
Epoch 0.00
Minibatch loss: 12.054, learning rate: 0.010000
Minibatch error: 90.6%
Validation error: 84.6%
Epoch 0.12
Minibatch loss: 3.285, learning rate: 0.010000
Minibatch error: 6.2%
Validation error: 7.0%
Epoch 0.23
Minibatch loss: 3.473, learning rate: 0.010000
Minibatch error: 10.9%
Validation error: 3.7%
Epoch 0.35
Minibatch loss: 3.221, learning rate: 0.010000
Minibatch error: 4.7%
Validation error: 3.2%
Epoch 0.47
Minibatch loss: 3.193, learning rate: 0.010000
Minibatch error: 4.7%
Validation error: 2.7%
Epoch 0.58
Minibatch loss: 3.301, learning rate: 0.010000
Minibatch error: 9.4%
Validation error: 2.5%
Epoch 0.70
Minibatch loss: 3.203, learning rate: 0.010000
Minibatch error: 6.2%
Validation error: 2.8%
Epoch 0.81
Minibatch loss: 3.022, learning rate: 0.010000
Minibatch error: 4.7%
Validation error: 2.6%
Epoch 0.93
Minibatch loss: 3.131, learning rate: 0.010000
Minibatch error: 6.2%
Validation error: 2.1%
Epoch 1.05
Minibatch loss: 2.954, learning rate: 0.009500
Minibatch error: 3.1%
Validation error: 1.6%
Epoch 1.16
Minibatch loss: 2.854, learning rate: 0.009500
Minibatch error: 0.0%
Validation error: 1.8%
Epoch 1.28
Minibatch loss: 2.825, learning rate: 0.009500
Minibatch error: 1.6%
Validation error: 1.4%
Epoch 1.40
Minibatch loss: 2.938, learning rate: 0.009500
Minibatch error: 7.8%
Validation error: 1.5%
Epoch 1.51
Minibatch loss: 2.767, learning rate: 0.009500
Minibatch error: 0.0%
Validation error: 1.9%
Epoch 1.63
Minibatch loss: 2.771, learning rate: 0.009500
Minibatch error: 3.1%
Validation error: 1.4%
Epoch 1.75
Minibatch loss: 2.844, learning rate: 0.009500
Minibatch error: 4.7%
Validation error: 1.2%
Epoch 1.86
Minibatch loss: 2.694, learning rate: 0.009500
Minibatch error: 0.0%
Validation error: 1.3%
Epoch 1.98
Minibatch loss: 2.650, learning rate: 0.009500
Minibatch error: 0.0%
Validation error: 1.5%
Epoch 2.09
Minibatch loss: 2.667, learning rate: 0.009025
Minibatch error: 1.6%
Validation error: 1.4%
Epoch 2.21
Minibatch loss: 2.658, learning rate: 0.009025
Minibatch error: 1.6%
Validation error: 1.2%
Epoch 2.33
Minibatch loss: 2.640, learning rate: 0.009025
Minibatch error: 3.1%
Validation error: 1.2%
Epoch 2.44
Minibatch loss: 2.579, learning rate: 0.009025
Minibatch error: 1.6%
Validation error: 1.1%
Epoch 2.56
Minibatch loss: 2.568, learning rate: 0.009025
Minibatch error: 0.0%
Validation error: 1.2%
Epoch 2.68
Minibatch loss: 2.554, learning rate: 0.009025
Minibatch error: 1.6%
Validation error: 1.1%
Epoch 2.79
Minibatch loss: 2.503, learning rate: 0.009025
Minibatch error: 0.0%
Validation error: 1.2%
Epoch 2.91
Minibatch loss: 2.487, learning rate: 0.009025
Minibatch error: 0.0%
Validation error: 1.2%
Epoch 3.03
Minibatch loss: 2.463, learning rate: 0.008574
Minibatch error: 1.6%
Validation error: 1.2%
Epoch 3.14
Minibatch loss: 2.458, learning rate: 0.008574
Minibatch error: 1.6%
Validation error: 1.1%
Epoch 3.26
Minibatch loss: 2.410, learning rate: 0.008574
Minibatch error: 0.0%
Validation error: 1.4%
Epoch 3.37
Minibatch loss: 2.496, learning rate: 0.008574
Minibatch error: 3.1%
Validation error: 1.3%
Epoch 3.49
Minibatch loss: 2.399, learning rate: 0.008574
Minibatch error: 1.6%
Validation error: 1.1%
Epoch 3.61
Minibatch loss: 2.377, learning rate: 0.008574
Minibatch error: 0.0%
Validation error: 1.1%
Epoch 3.72
Minibatch loss: 2.333, learning rate: 0.008574
Minibatch error: 0.0%
Validation error: 1.1%
Epoch 3.84
Minibatch loss: 2.312, learning rate: 0.008574
Minibatch error: 0.0%
Validation error: 1.2%
Epoch 3.96
Minibatch loss: 2.300, learning rate: 0.008574
Minibatch error: 1.6%
Validation error: 1.1%
Epoch 4.07
Minibatch loss: 2.276, learning rate: 0.008145
Minibatch error: 0.0%
Validation error: 1.1%
Epoch 4.19
Minibatch loss: 2.250, learning rate: 0.008145
Minibatch error: 0.0%
Validation error: 1.0%
Epoch 4.31
Minibatch loss: 2.233, learning rate: 0.008145
Minibatch error: 0.0%
Validation error: 1.0%
Epoch 4.42
Minibatch loss: 2.217, learning rate: 0.008145
Minibatch error: 0.0%
Validation error: 0.9%
Epoch 4.54
Minibatch loss: 2.324, learning rate: 0.008145
Minibatch error: 3.1%
Validation error: 1.0%
Epoch 4.65
Minibatch loss: 2.212, learning rate: 0.008145
Minibatch error: 0.0%
Validation error: 1.0%
Epoch 4.77
Minibatch loss: 2.174, learning rate: 0.008145
Minibatch error: 0.0%
Validation error: 0.9%
Epoch 4.89
Minibatch loss: 2.211, learning rate: 0.008145
Minibatch error: 1.6%
Validation error: 1.0%
Epoch 5.00
Minibatch loss: 2.193, learning rate: 0.007738
Minibatch error: 1.6%
Validation error: 1.0%
Epoch 5.12
Minibatch loss: 2.148, learning rate: 0.007738
Minibatch error: 3.1%
Validation error: 1.0%
Epoch 5.24
Minibatch loss: 2.153, learning rate: 0.007738
Minibatch error: 3.1%
Validation error: 1.0%
Epoch 5.35
Minibatch loss: 2.111, learning rate: 0.007738
Minibatch error: 1.6%
Validation error: 0.9%
Epoch 5.47
Minibatch loss: 2.084, learning rate: 0.007738
Minibatch error: 1.6%
Validation error: 0.8%
Epoch 5.59
Minibatch loss: 2.054, learning rate: 0.007738
Minibatch error: 0.0%
Validation error: 1.0%
Epoch 5.70
Minibatch loss: 2.043, learning rate: 0.007738
Minibatch error: 0.0%
Validation error: 1.0%
Epoch 5.82
Minibatch loss: 2.134, learning rate: 0.007738
Minibatch error: 3.1%
Validation error: 1.0%
Epoch 5.93
Minibatch loss: 2.006, learning rate: 0.007738
Minibatch error: 0.0%
Validation error: 1.0%
Epoch 6.05
Minibatch loss: 2.048, learning rate: 0.007351
Minibatch error: 3.1%
Validation error: 0.9%
Epoch 6.17
Minibatch loss: 1.988, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 1.1%
Epoch 6.28
Minibatch loss: 1.957, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 0.8%
Epoch 6.40
Minibatch loss: 1.971, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 0.9%
Epoch 6.52
Minibatch loss: 1.927, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 0.9%
Epoch 6.63
Minibatch loss: 1.912, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 1.0%
Epoch 6.75
Minibatch loss: 1.901, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 0.8%
Epoch 6.87
Minibatch loss: 1.886, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 0.8%
Epoch 6.98
Minibatch loss: 1.894, learning rate: 0.007351
Minibatch error: 1.6%
Validation error: 1.0%
Epoch 7.10
Minibatch loss: 1.859, learning rate: 0.006983
Minibatch error: 0.0%
Validation error: 0.8%
Epoch 7.21
Minibatch loss: 1.844, learning rate: 0.006983
Minibatch error: 0.0%
Validation error: 0.9%
Epoch 7.33
Minibatch loss: 1.836, learning rate: 0.006983
Minibatch error: 0.0%
Validation error: 1.0%
Epoch 7.45
Minibatch loss: 1.887, learning rate: 0.006983
Minibatch error: 3.1%
Validation error: 0.9%
Epoch 7.56
Minibatch loss: 1.808, learning rate: 0.006983
Minibatch error: 0.0%
Validation error: 0.8%
Epoch 7.68
Minibatch loss: 1.822, learning rate: 0.006983
Minibatch error: 1.6%
Validation error: 0.9%
Epoch 7.80
Minibatch loss: 1.782, learning rate: 0.006983
Minibatch error: 0.0%
Validation error: 0.9%
Epoch 7.91
Minibatch loss: 1.772, learning rate: 0.006983
Minibatch error: 0.0%
Validation error: 0.9%
Epoch 8.03
Minibatch loss: 1.761, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 0.9%
Epoch 8.15
Minibatch loss: 1.773, learning rate: 0.006634
Minibatch error: 1.6%
Validation error: 0.9%
Epoch 8.26
Minibatch loss: 1.742, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 0.9%
Epoch 8.38
Minibatch loss: 1.744, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 0.9%
Epoch 8.49
Minibatch loss: 1.719, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 0.8%
Epoch 8.61
Minibatch loss: 1.700, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 0.9%
Epoch 8.73
Minibatch loss: 1.700, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 0.8%
Epoch 8.84
Minibatch loss: 1.801, learning rate: 0.006634
Minibatch error: 1.6%
Validation error: 0.8%
Epoch 8.96
Minibatch loss: 1.666, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 0.9%
Epoch 9.08
Minibatch loss: 1.666, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.9%
Epoch 9.19
Minibatch loss: 1.649, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.8%
Epoch 9.31
Minibatch loss: 1.676, learning rate: 0.006302
Minibatch error: 1.6%
Validation error: 0.8%
Epoch 9.43
Minibatch loss: 1.626, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.8%
Epoch 9.54
Minibatch loss: 1.621, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.9%
Epoch 9.66
Minibatch loss: 1.606, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.8%
Epoch 9.77
Minibatch loss: 1.596, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.9%
Epoch 9.89
Minibatch loss: 1.602, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.9%
Test error: 0.8%

The execution time was as follows.
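
These figures look like the output of the shell's time builtin (an assumption; the original command is not shown), e.g.:

$ time python convolutional.py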

- MNIST -

real 26m26.382s

user 77m58.714s

sys 19m21.539s


It probably took this long because everything ran on the CPU inside a virtualized environment.

Even so, it ran on a 4th-generation i7 processor with 24 GB of RAM and a 256 GB SSD Pro, and it still took roughly half as long as compiling Android 5.0.







