Homework 2.

We continue getting acquainted with the tensorflow library.

Completed by: Подчезерцев Алексей

In [3]:
import tensorflow as tf
print(tf.__version__)
import numpy as np
import matplotlib.pyplot as plt
import time
%matplotlib inline
1.14.0

Task 1 -- tensorflow vs numpy (3 points).

Compare the speed of array operations in the tensorflow and numpy frameworks. To do this, implement the following in both numpy and tf:

  • The sum of squared diagonal elements of a square matrix. For example, for the matrix $$ \begin{pmatrix} 1& 0& 5\\ -2& 8& 12\\ 4& 1& -5 \end{pmatrix} $$ this sum equals $1^2 + 8^2 + (-5)^2 = 90$.
  • The angle between vectors in n-dimensional space. Recall that it is computed by the formula $$ \arccos \cfrac{\left\langle x, y\right\rangle}{||x||\cdot ||y||} $$

  • The sum of the elements of the commutator of square matrices $A$ and $B$. The commutator of two matrices is the matrix $C = AB - BA$.
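Before timing anything, the three quantities can be sanity-checked in a few lines of plain NumPy (the vectors and the matrix $B$ below are illustrative picks, not part of the task):

```python
import numpy as np

A = np.array([[1, 0, 5], [-2, 8, 12], [4, 1, -5]], dtype=float)

# 1. sum of squared diagonal elements: 1^2 + 8^2 + (-5)^2 = 90
diag_sq_sum = np.sum(A.diagonal() ** 2)

# 2. angle between two vectors x and y (orthogonal here, so pi/2)
x, y = np.array([1.0, 0.0]), np.array([0.0, 2.0])
angle = np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

# 3. sum of the elements of the commutator C = AB - BA
B = np.eye(3)  # the commutator with the identity is the zero matrix
comm_sum = np.sum(A @ B - B @ A)

print(diag_sq_sum, angle, comm_sum)
```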

Plot the execution time of each operation versus array size (on a logarithmic scale) for each task, for tensorflow and numpy (three figures, two curves per figure). Choose the matrix elements at random (via the tf.random and np.random modules respectively). Which framework turns out to be faster? Why do you think that is?

You can use the sample code below.

Note. Plots must be tidy! Label the axes and units of measurement, and include a legend. Untidy plots may cost points.

Hint. The function time.time() returns the time in seconds (with high precision) elapsed since 00:00 on January 1, 1970. Use it to measure how long a piece of code takes. The functions tf.linalg.norm, tf.diag_part, tf.acos, and tf.matmul may also come in handy.
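A minimal timing pattern with time.time() (for very short operations time.perf_counter() offers better resolution; the matrix size here is arbitrary):

```python
import time
import numpy as np

x = np.random.random_sample(size=(500, 500))

start = time.time()
_ = np.sum(x.diagonal() ** 2)   # the operation being timed
elapsed = time.time() - start   # seconds, as a float

print(f"elapsed: {elapsed * 1000:.3f} ms")
```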

In [0]:
def log_int_iterator(start, end, step):
    i = start
    last = None
    while i <= end:
        if int(i) != last:
            last = int(i)
            yield last
        i *= step
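The helper above yields integer sizes on a roughly geometric grid, skipping duplicate integers that occur while the float counter is still small. To see what it produces (the definition is repeated so the snippet is self-contained):

```python
def log_int_iterator(start, end, step):
    # same helper as above, repeated here so the snippet runs on its own
    i = start
    last = None
    while i <= end:
        if int(i) != last:
            last = int(i)
            yield last
        i *= step

print(list(log_int_iterator(1, 10, 2)))   # powers of two
sizes = list(log_int_iterator(1, 100, 1.1))
print(len(sizes), sizes[:5])
```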
In [0]:
MAX_VALUE = 1
MIN_VALUE = 0
hist_n = np.array([])
hist_t_np = np.array([])
hist_t_tf = np.array([])

for N in log_int_iterator(1, 10000, 1.1):
    hist_n = np.append(hist_n, N)

    np.random.seed(42)
    x = np.random.random_sample(size=(N, N))
    start = time.time()
    _ = np.sum(x.diagonal() ** 2)
    hist_t_np = np.append(hist_t_np, time.time() - start)

    tf.reset_default_graph()
    with tf.Session() as sess:
        x = tf.random.uniform(shape=(N, N), minval=MIN_VALUE, maxval=MAX_VALUE, dtype=tf.float32, seed=42)
        x_res = sess.run(x)
        # we do not need to include random-number generation in the timing
        x_placeholder = tf.placeholder(tf.float32, (N, N))
        y = tf.math.reduce_sum(tf.square(tf.linalg.tensor_diag_part(x_placeholder)))
        start = time.time()
        sess.run(y, {x_placeholder:x_res})
        hist_t_tf = np.append(hist_t_tf, time.time() - start)


hist_t_np = hist_t_np * 1000
hist_t_tf = hist_t_tf * 1000
plt.figure(figsize=[16, 7])
plt.title('Computation time of "sum of squared diagonal elements of a square matrix" for numpy and tensorflow')
plt.scatter(hist_n, hist_t_np, c='r', label='numpy')
plt.scatter(hist_n, hist_t_tf, c='b', label='tensorflow')
plt.legend(loc='upper left')
plt.xscale("log")
plt.ylim([min(hist_t_tf.min(), hist_t_np.min()), max(hist_t_tf.max(), hist_t_np.max())])
plt.xlabel("Matrix size N (for an N×N matrix)")
plt.ylabel("Computation time, ms")
plt.show()
In [0]:
MAX_VALUE = 1
MIN_VALUE = 0
hist_n = np.array([])
hist_t_np = np.array([])
hist_t_tf = np.array([])

for N in log_int_iterator(1, 1000, 1.1):
    hist_n = np.append(hist_n, N)

    np.random.seed(42)
    x = np.random.random_sample(size=(N))
    y = np.random.random_sample(size=(N))
    
    start = time.time()
    _ = np.arccos(np.dot(x, y)/(np.linalg.norm(x) * np.linalg.norm(y)))
    hist_t_np = np.append(hist_t_np, time.time() - start)

    tf.reset_default_graph()
    with tf.Session() as sess:
        x = tf.random.uniform(shape=(N,), minval=MIN_VALUE, maxval=MAX_VALUE, dtype=tf.float32, seed=42)
        y = tf.random.uniform(shape=(N,), minval=MIN_VALUE, maxval=MAX_VALUE, dtype=tf.float32, seed=43)

        x_res, y_res = sess.run([x, y])
        x_placeholder = tf.placeholder(tf.float32, (N,))
        y_placeholder = tf.placeholder(tf.float32, (N,))
        res = tf.math.acos(tf.linalg.tensordot(x_placeholder, y_placeholder, 1)/(tf.linalg.norm(x_placeholder) * tf.linalg.norm(y_placeholder)))
        start = time.time()
        _ = sess.run(res, {x_placeholder:x_res, y_placeholder: y_res})
        hist_t_tf = np.append(hist_t_tf, time.time() - start)


hist_t_np = hist_t_np * 1000
hist_t_tf = hist_t_tf * 1000
plt.figure(figsize=[16, 7])
plt.title('Computation time of "angle between vectors in n-dimensional space" for numpy and tensorflow')
plt.scatter(hist_n, hist_t_np, c='r', label='numpy')
plt.scatter(hist_n, hist_t_tf, c='b', label='tensorflow')
plt.legend(loc='upper left')
plt.xscale("log")
plt.ylim([min(hist_t_tf.min(), hist_t_np.min()), max(hist_t_tf.max(), hist_t_np.max())])
plt.xlabel("Vector length N")
plt.ylabel("Computation time, ms")
plt.show()
In [0]:
MAX_VALUE = 1
MIN_VALUE = 0
hist_n = np.array([])
hist_t_np = np.array([])
hist_t_tf = np.array([])

for N in log_int_iterator(1, 5000, 1.1):
    hist_n = np.append(hist_n, N)

    np.random.seed(42)
    A = np.random.random_sample(size=(N, N))
    B = np.random.random_sample(size=(N, N))
    
    start = time.time()
    _ = np.sum(np.dot(A,B) - np.dot(B,A))
    hist_t_np = np.append(hist_t_np, time.time() - start)

    tf.reset_default_graph()
    with tf.Session() as sess:
        x = tf.random.uniform(shape=(N, N), minval=MIN_VALUE, maxval=MAX_VALUE, dtype=tf.float32, seed=42)
        y = tf.random.uniform(shape=(N, N), minval=MIN_VALUE, maxval=MAX_VALUE, dtype=tf.float32, seed=43)
        A,B = sess.run([x,y])

        A_placeholder = tf.placeholder(tf.float32, (N,N))
        B_placeholder = tf.placeholder(tf.float32, (N,N))
        res = tf.math.reduce_sum(tf.matmul(A_placeholder,B_placeholder) - tf.matmul(B_placeholder,A_placeholder))
        start = time.time()
        _ = sess.run(res, {A_placeholder:A, B_placeholder:B})
        hist_t_tf = np.append(hist_t_tf, time.time() - start)


hist_t_np = hist_t_np * 1000
hist_t_tf = hist_t_tf * 1000
plt.figure(figsize=[16, 7])
plt.title('Computation time of "sum of the elements of the commutator of square matrices" for numpy and tensorflow')
plt.scatter(hist_n, hist_t_np, c='r', label='numpy')
plt.scatter(hist_n, hist_t_tf, c='b', label='tensorflow')
plt.legend(loc='upper left')
plt.xscale("log")
plt.ylim([min(hist_t_tf.min(), hist_t_np.min()), max(hist_t_tf.max(), hist_t_np.max())])
plt.xlabel("Matrix size N (for an N×N matrix)")
plt.ylabel("Computation time, ms")
plt.show()

The tensorflow code runs somewhat slower than the corresponding numpy code for the angle-between-vectors and sum-of-squared-diagonal tasks; however, tensorflow computes the commutator sum faster starting from matrix sizes around $10^3$.

It is worth noting that tensorflow computations must be set up correctly: reset sessions and computation graphs, use placeholders, use float instead of int, and exclude random-number generation from the timing. Without all of this, the computation time can be very large and unappealing for further work.

Task 2 -- gradients and optimizers (3 points).

We continue working with the MNIST dataset with 8x8 images.

In [0]:
from sklearn.datasets import load_digits

mnist = load_digits()

X, y = mnist.data, mnist.target

n_labels = len(np.unique(y))

Many optimization algorithms are implemented in tensorflow. In this task we compare them under identical parameters, and also sweep different parameters for a single algorithm.

Task 2.1 (1.5 points). Investigate the contribution of the momentum parameter to tf.train.MomentumOptimizer. For different values of momentum, plot the loss value versus the iteration number. For which values of momentum does the algorithm converge faster? Use learning_rate=0.01.
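For reference, tf.train.MomentumOptimizer maintains an accumulator updated as accumulation = momentum * accumulation + gradient and then applies variable -= learning_rate * accumulation. A toy NumPy sketch of this rule on the 1-D quadratic f(w) = w² (the loss, initial point, and step count are illustrative, not part of the task):

```python
def run_momentum(momentum, lr=0.01, steps=100):
    # minimize f(w) = w^2, so grad f(w) = 2w
    w, v = 1.0, 0.0
    for _ in range(steps):
        grad = 2.0 * w
        v = momentum * v + grad   # MomentumOptimizer's accumulator
        w = w - lr * v
    return w

for m in (0.0, 0.5, 0.9):
    print(m, run_momentum(m))
```

Higher momentum lets the iterate cover more ground per step at the same learning rate, at the cost of possible oscillation, which matches the sweep over momentum below.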

Note. This task uses multiclass logistic regression. Do not change the model code in the cell below.

In [6]:
tf.reset_default_graph()

w = tf.Variable(np.ones((X.shape[1], n_labels)), dtype="float32")
X_input = tf.placeholder("float32", (None, X.shape[1]))
y_input = tf.placeholder("int32", (None,))

predicted = tf.nn.softmax(X_input @ w)
loss = tf.losses.log_loss(tf.one_hot(y_input, depth=n_labels), predicted)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/losses/losses_impl.py:121: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
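The loss above one-hot encodes the integer labels with tf.one_hot; in NumPy terms the encoding is just rows of zeros with a single one (a minimal sketch, the helper name is our own):

```python
import numpy as np

def one_hot(y, depth):
    # one row per label, a 1.0 in the column given by the label
    out = np.zeros((len(y), depth), dtype=np.float32)
    out[np.arange(len(y)), y] = 1.0
    return out

print(one_hot(np.array([0, 2, 1]), depth=3))
```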
In [0]:
def train(X, y, train_op, batch_size=16):
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        n_batch_train = len(X) // batch_size
        for epoch in range(1):
            loss_history = []
            for b in range(n_batch_train):
                _, loss_ = sess.run([train_op, loss], feed_dict={X_input: X[b*batch_size:(b+1)*batch_size],
                                                                 y_input: y[b*batch_size:(
                                                                     b+1)*batch_size]
                                                                 })
                loss_history.append(loss_)
    return loss_history
In [0]:
plt.figure(figsize=[16, 7])
for momentum in np.geomspace(0.8, 0.9, num=5):
    train_op = tf.train.MomentumOptimizer(
        learning_rate=0.01, momentum=momentum).minimize(loss)
    hist = train(X, y, train_op)
    plt.plot(hist, label="Momentum {:.5f}".format(momentum))
    print("Momentum {:.5f}\tSteps {}\t loss {}".format(momentum, len(hist), hist[-1]))

plt.title("Loss curves for MomentumOptimizer at different momentum values")
plt.yscale("logit")
plt.legend(loc='upper left')
plt.xlabel("# iteration")
plt.ylabel("Loss") 
plt.show()
Momentum 0.80000	Steps 112	 loss 0.08640135824680328
Momentum 0.82391	Steps 112	 loss 0.07979313284158707
Momentum 0.84853	Steps 112	 loss 0.06699873507022858
Momentum 0.87389	Steps 112	 loss 0.04864809662103653
Momentum 0.90000	Steps 112	 loss 0.07606975734233856

Answer: 0.87389

Task 2.2 (0.5 points). Investigate the contribution of learning_rate. For different values of learning_rate, plot the loss value versus the iteration number. For which gradient-descent step sizes does the algorithm converge faster? Use MomentumOptimizer with the momentum parameter you found best in the previous task.

In [0]:
MOMENTUM_BEST = 0.87389
plt.figure(figsize=[16, 7])
for learning_rate in np.geomspace(0.006, 0.015, num=10):
    train_op = tf.train.MomentumOptimizer(
        learning_rate=learning_rate, momentum=MOMENTUM_BEST).minimize(loss)
    hist = train(X, y, train_op)
    plt.plot(hist, label="learning_rate {:.5f}".format(learning_rate))
    print("learning_rate {:.5f}\tSteps {}\t loss {}".format(learning_rate, len(hist), hist[-1]))

plt.title("Loss curves for MomentumOptimizer at different learning_rate values")
# plt.yscale("logit")
plt.legend(loc='upper left')
plt.xlabel("# iteration")
plt.ylabel("Loss") 
plt.show()
learning_rate 0.00600	Steps 112	 loss 0.07103659957647324
learning_rate 0.00664	Steps 112	 loss 0.0611911416053772
learning_rate 0.00735	Steps 112	 loss 0.05344482511281967
learning_rate 0.00814	Steps 112	 loss 0.04877835884690285
learning_rate 0.00902	Steps 112	 loss 0.0472920760512352
learning_rate 0.00998	Steps 112	 loss 0.04860434681177139
learning_rate 0.01105	Steps 112	 loss 0.051771409809589386
learning_rate 0.01224	Steps 112	 loss 0.058923590928316116
learning_rate 0.01355	Steps 112	 loss 0.08953150361776352
learning_rate 0.01500	Steps 112	 loss 0.15866509079933167

Answer: 0.00814

Task 2.3 (0.5 points). Do the same as in the previous item, but use Adam with default parameters as the base optimization algorithm.
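For reference, tf.train.AdamOptimizer's defaults are learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8. A toy NumPy sketch of the Adam update on the same kind of 1-D quadratic (purely illustrative, not the task's model):

```python
import numpy as np

def run_adam(lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8, steps=2000):
    w, m, v = 1.0, 0.0, 0.0
    for t in range(1, steps + 1):
        grad = 2.0 * w                        # grad of f(w) = w^2
        m = beta1 * m + (1 - beta1) * grad    # first-moment estimate
        v = beta2 * v + (1 - beta2) * grad ** 2
        m_hat = m / (1 - beta1 ** t)          # bias correction
        v_hat = v / (1 - beta2 ** t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

print(run_adam())
```

Because the step is normalized by √v_hat, Adam's effective step size is roughly learning_rate regardless of gradient scale, which is why its useful learning_rate range differs from Momentum's.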

In [0]:
plt.figure(figsize=[16, 7])
for learning_rate in np.geomspace(0.00975, 0.01304, num=10):
    train_op = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
    hist = train(X, y, train_op)
    plt.plot(hist, label="learning_rate {:.5f}".format(learning_rate))
    print("learning_rate {:.5f}\tSteps {}\t loss {}".format(learning_rate, len(hist), hist[-1]))

plt.title("Loss curves for Adam at different learning_rate values")
# plt.yscale("logit")
plt.legend(loc='upper left')
plt.xlabel("# iteration")
plt.ylabel("Loss")
plt.show()
learning_rate 0.00975	Steps 112	 loss 0.075477734208107
learning_rate 0.01007	Steps 112	 loss 0.07318376004695892
learning_rate 0.01040	Steps 112	 loss 0.07012206315994263
learning_rate 0.01074	Steps 112	 loss 0.06696666777133942
learning_rate 0.01109	Steps 112	 loss 0.06567150354385376
learning_rate 0.01146	Steps 112	 loss 0.07153745740652084
learning_rate 0.01184	Steps 112	 loss 0.08953450620174408
learning_rate 0.01222	Steps 112	 loss 0.11064328998327255
learning_rate 0.01263	Steps 112	 loss 0.12888753414154053
learning_rate 0.01304	Steps 112	 loss 0.12920765578746796

Answer: 0.01109

Task 2.4 (0.5 points). Compare the Adam and Momentum algorithms on this problem. Which performs better?

In [8]:
MOMENTUM_BEST = 0.87389
LEARNING_RATE_ADAM = 0.01109
LEARNING_RATE_MOMENTUM = 0.00814
plt.figure(figsize=[16, 7])

train_op = tf.train.MomentumOptimizer(learning_rate=LEARNING_RATE_MOMENTUM, momentum=MOMENTUM_BEST).minimize(loss)
hist = train(X, y, train_op)
plt.plot(hist, label="MomentumOptimizer")
print("MomentumOptimizer \tSteps {}\t loss {}".format(len(hist), hist[-1]))

train_op = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE_ADAM).minimize(loss)
hist = train(X, y, train_op)
plt.plot(hist, label="Adam")
print("Adam \t\t\tSteps {}\t loss {}".format(len(hist), hist[-1]))

plt.title("Loss curves for Adam and Momentum with the tuned hyperparameters")
plt.legend(loc='upper left')
plt.xlabel("# iteration")
plt.ylabel("Loss")
plt.show()
MomentumOptimizer 	Steps 112	 loss 0.04878924414515495
Adam 			Steps 112	 loss 0.06567990034818649

Both algorithms reach roughly the same quality, but MomentumOptimizer ends with the lower final loss.

Task 3 -- our first neural network, part 2 (4 points).

In this task we will write a neural network for the MNIST dataset with 28x28 images. Only fully connected (dense) layers are allowed! To do this, we "flatten" each 28x28 image into a long vector of size 784.

In [0]:
from mnist import load_dataset

X_train, y_train, X_test, y_test, _, _ = load_dataset()

X_train = X_train.reshape(len(X_train), -1)
X_test = X_test.reshape(len(X_test), -1)
In [12]:
for i in [228, 1437, 322, 420, 69]:
    plt.title(y_train[i])
    plt.imshow(X_train[i].reshape((28, 28)))
    plt.show()

Choose the architecture and optimization algorithm so that accuracy on the test set is at least 97.5%.
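Structurally, the graph below (one hidden ReLU layer plus a softmax output) corresponds to the following NumPy forward pass; the weights here are random stand-ins, only the shapes and activations mirror the model:

```python
import numpy as np

rng = np.random.RandomState(0)
batch, n_in, n_hidden, n_out = 16, 784, 256, 10

# random stand-ins for the trained parameters
W1, b1 = rng.randn(n_in, n_hidden) * 0.01, np.zeros(n_hidden)
W2, b2 = rng.randn(n_hidden, n_out) * 0.01, np.zeros(n_out)

x = rng.rand(batch, n_in)                      # a fake batch of flattened images
h = np.maximum(x @ W1 + b1, 0.0)               # dense layer + ReLU
logits = h @ W2 + b2                           # 10-class logits

# numerically stable softmax over the class axis
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)

print(p.shape, p.sum(axis=1)[:3])
```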

In [28]:
from sklearn.metrics import accuracy_score
tf.reset_default_graph()
EPOCHS = 5

def train_and_validate(X_train, y_train, X_test, y_test, train_op, batch_size=16):
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        n_batch_train = len(X_train) // batch_size
        n_batch_test = len(X_test) // batch_size
        for epoch in range(EPOCHS):
            loss_history_train = []
            for b in range(n_batch_train):
                _, loss_ = sess.run([train_op, loss], feed_dict={X_input: X_train[b*batch_size:(b+1)*batch_size],
                                                                 y_input: y_train[b*batch_size:(
                                                                     b+1)*batch_size]
                                                                 })
                loss_history_train.append(loss_)

        for epoch in range(1):
            loss_history_test = []
            prediction_history = []
            for b in range(n_batch_test):
                loss_, predicted_ = sess.run([loss, predicted], feed_dict={X_input: X_test[b*batch_size:(b+1)*batch_size],
                                                                           y_input: y_test[b*batch_size:(
                                                                               b+1)*batch_size]
                                                                           })
                loss_history_test.append(loss_)
                prediction_history += predicted_.argmax(-1).tolist()
            print("Test accuracy: ", accuracy_score(y_test, prediction_history))
    return loss_history_train, loss_history_test

for i in log_int_iterator(64, 1024, 2):
    tf.reset_default_graph()
    X_input = tf.placeholder("float32", (None, 784))  # dim = [batch_size, 784]
    y_input = tf.placeholder("int32", (None,))  # dim = [batch_size,]

    # <define architecture as a function of X_input>
    layer2 = tf.layers.dense(X_input, i, activation=tf.nn.relu)
    # layer2 = tf.layers.dense(layer1, 256, activation=tf.nn.relu)

    # <define 10-class outputs>
    logits =  tf.layers.dense(layer2, n_labels)
    predicted = tf.nn.softmax(logits)

    # <define log loss with one-hot vector of labels
    loss = tf.nn.softmax_cross_entropy_with_logits_v2(tf.one_hot(y_input, depth=n_labels), logits)
    # <define train operation here>
    # train_op = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE_ADAM).minimize(loss)
    train_op = tf.train.AdamOptimizer().minimize(loss)
    # train_op = tf.train.MomentumOptimizer(learning_rate=LEARNING_RATE_MOMENTUM, momentum=MOMENTUM_BEST).minimize(loss)

    print("="*60, '\n', "Layer size:", i)
    loss_history_train, loss_history_test = train_and_validate(
        X_train, y_train, X_test, y_test, train_op)
WARNING:tensorflow:Entity <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f29a0197e48>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f29a0197e48>>: AssertionError: Bad argument number for Name: 3, expecting 4
============================================================ 
 Layer size: 64
Test accuracy:  0.9703
============================================================ 
 Layer size: 128
Test accuracy:  0.9735
============================================================ 
 Layer size: 256
Test accuracy:  0.9762
============================================================ 
 Layer size: 512
Test accuracy:  0.9773
============================================================ 
 Layer size: 1024
Test accuracy:  0.9777

The layer of 1024 neurons performed best, but the required quality was already reached with 256.

The required quality was achieved with AdamOptimizer using its default parameters; the values tuned in Task 2 gave lower quality.

The number of epochs was increased to 5 to let the model continue training and fine-tune its weights; further increases do not noticeably change model quality.