Machine Learning - 04: A Basic Two-Layer Neural Network

PyTorch is a Python-based scientific computing library with the following features:

  • It is similar to NumPy, but it can use GPUs
  • It can be used to define deep learning models, and to train and use those models flexibly

Tensor

A Tensor is similar to NumPy's ndarray; the key difference is that a tensor can be placed on a GPU to accelerate computation.

from __future__ import print_function
import torch

Construct an uninitialized 5 × 3 matrix:

x = torch.empty(5, 3)
print(x)
tensor([[1.2584e-23, 4.5800e-41, 5.5303e-22],
        [4.5800e-41, 0.0000e+00, 0.0000e+00],
        [0.0000e+00, 0.0000e+00, 0.0000e+00],
        [0.0000e+00, 4.2603e-21, 1.4013e-45],
        [5.5193e-22, 4.5800e-41, 0.0000e+00]])

Construct a randomly initialized matrix:

# rand samples from [0, 1), so its values are all non-negative;
# randn samples from a standard normal, so values can be positive or negative
x = torch.rand(5, 3)
print(x)
x = torch.randn(5, 3)
print(x)
tensor([[0.9556, 0.5058, 0.0706],
        [0.2486, 0.8324, 0.8027],
        [0.4677, 0.9293, 0.5427],
        [0.7236, 0.9158, 0.4789],
        [0.2861, 0.2748, 0.8777]])
tensor([[ 1.8956,  0.2862,  0.0363],
        [ 0.8295, -0.8973, -0.7135],
        [-0.7953,  0.7180, -0.0232],
        [-0.3304, -0.0801, -0.1258],
        [ 0.3730, -0.0505,  0.9129]])

Construct a matrix filled with zeros, with dtype long:

x = torch.zeros(5, 3, dtype=torch.long)
print(x)
tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]])

Construct a tensor directly from data:

x = torch.tensor([5.5, 3])
print(x)
tensor([5.5000, 3.0000])

You can also build a tensor from an existing tensor. These methods reuse properties of the input tensor, such as its dtype, unless new values are provided.

# Building a new tensor from an existing one can also change its shape and dtype
x = x.new_ones(3, 5, dtype=torch.double) # new_* methods take in sizes
print(x)
x = x.new_zeros(3, 5)
print(x)
# randn_like only reuses the shape of the input tensor
x = torch.randn_like(x, dtype=torch.float) # override dtype
print(x)
tensor([[1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.]], dtype=torch.float64)
tensor([[0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0.]], dtype=torch.float64)
tensor([[-0.3154, -0.2669, -0.0327, -0.0288,  0.6338],
        [ 1.8162,  1.5946, -1.3593, -0.8494,  1.5819],
        [-0.2012, -1.8754, -0.4515, -0.6153,  0.0842]])

Get the shape of a tensor:

# x.size() returns a torch.Size, which is a tuple
y = x.size()
print(y, type(y))
torch.Size([3, 5]) <class 'torch.Size'>

Note

torch.Size is a tuple, so it supports all tuple operations.
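For example, because torch.Size behaves like a tuple, it can be unpacked and indexed directly (a minimal sketch; the names rows and cols are just illustrative):

rows, cols = x.size()   # tuple unpacking works on torch.Size
print(rows, cols)       # 3 5
print(x.size()[0])      # indexing works too: 3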

Operations

There are many kinds of tensor operations. Let's start with addition.

y = torch.rand(3, 5)
print(x + y)
tensor([[-0.1155,  0.1429,  0.2812,  0.0049,  1.1452],
        [ 1.8705,  2.3386, -1.1029,  0.1432,  2.5273],
        [ 0.2852, -1.7390,  0.1234, -0.5170,  0.1819]])

Another way to write addition:

print(torch.add(x, y))
tensor([[-0.1155,  0.1429,  0.2812,  0.0049,  1.1452],
        [ 1.8705,  2.3386, -1.1029,  0.1432,  2.5273],
        [ 0.2852, -1.7390,  0.1234, -0.5170,  0.1819]])

Addition: storing the output in a variable, or writing it into a preallocated tensor

res = x + y
print(res)
# or
result = torch.empty(3, 5)
torch.add(x, y, out=result) # writes the result into the preallocated tensor
print(result)
tensor([[-0.1155,  0.1429,  0.2812,  0.0049,  1.1452],
        [ 1.8705,  2.3386, -1.1029,  0.1432,  2.5273],
        [ 0.2852, -1.7390,  0.1234, -0.5170,  0.1819]])
tensor([[-0.1155,  0.1429,  0.2812,  0.0049,  1.1452],
        [ 1.8705,  2.3386, -1.1029,  0.1432,  2.5273],
        [ 0.2852, -1.7390,  0.1234, -0.5170,  0.1819]])

In-place addition

# add x to y
y.add(x) # returns a new tensor; y itself is unchanged
print(y)
y.add_(x)
print(y)
tensor([[0.1999, 0.4098, 0.3139, 0.0337, 0.5115],
        [0.0543, 0.7440, 0.2564, 0.9927, 0.9454],
        [0.4864, 0.1365, 0.5750, 0.0983, 0.0976]])
tensor([[-0.1155,  0.1429,  0.2812,  0.0049,  1.1452],
        [ 1.8705,  2.3386, -1.1029,  0.1432,  2.5273],
        [ 0.2852, -1.7390,  0.1234, -0.5170,  0.1819]])

Note

Any operation that mutates a tensor in-place is post-fixed with an ``_``. For example, ``x.copy_(y)`` and ``x.t_()`` will change ``x``.
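A minimal sketch of those two in-place operations (the tensors a and b here are only illustrative):

a = torch.zeros(2, 3)
b = torch.ones(2, 3)
a.copy_(b)       # copies the values of b into a, in place
a.t_()           # transposes a in place; its shape is now (3, 2)
print(a.size())  # torch.Size([3, 2])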

All the NumPy-style indexing operations work on PyTorch tensors as well.

print(x)
print(x[:, 1]) # select column 1 of every row
tensor([[-0.3154, -0.2669, -0.0327, -0.0288,  0.6338],
        [ 1.8162,  1.5946, -1.3593, -0.8494,  1.5819],
        [-0.2012, -1.8754, -0.4515, -0.6153,  0.0842]])
tensor([-0.2669,  1.5946, -1.8754])

Resizing: if you want to resize/reshape a tensor, you can use torch.view:

# reshape returns a new tensor and leaves x unchanged
print(x.reshape(5, 3))
print(x)

# resizing: -1 tells PyTorch to infer that dimension, but the total number of elements must divide evenly
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8) # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
tensor([[-0.3154, -0.2669, -0.0327],
        [-0.0288,  0.6338,  1.8162],
        [ 1.5946, -1.3593, -0.8494],
        [ 1.5819, -0.2012, -1.8754],
        [-0.4515, -0.6153,  0.0842]])
tensor([[-0.3154, -0.2669, -0.0327, -0.0288,  0.6338],
        [ 1.8162,  1.5946, -1.3593, -0.8494,  1.5819],
        [-0.2012, -1.8754, -0.4515, -0.6153,  0.0842]])
torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])

If you have a tensor with a single element, use .item() to get its value as a Python number:

x = torch.tensor([1])
print(x.item())
x = torch.randn(1)
print(x.item())
1
-0.4120529592037201

Further reading

All kinds of tensor operations, including transposing, indexing, slicing,
mathematical operations, linear algebra, and random numbers, are documented at
<https://pytorch.org/docs/torch>.

Converting between NumPy and Tensor

Converting between a Torch Tensor and a NumPy array is very easy.

The Torch Tensor and the NumPy array share the underlying memory, so changing one will also change the other.
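If you need an independent copy instead of a shared view, one option (a minimal sketch, not the only way) is to copy the NumPy array right after converting:

a = torch.ones(5)
b = a.numpy().copy()   # .copy() detaches b from a's memory
a.add_(1)
print(a)               # tensor([2., 2., 2., 2., 2.])
print(b)               # [1. 1. 1. 1. 1.] -- unchanged, because b is a copy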

Converting a Torch Tensor to a NumPy array

a = torch.ones(5)
print(a)
tensor([1., 1., 1., 1., 1.])
b = a.numpy()
print(b)
[1. 1. 1. 1. 1.]

See how changing the tensor in place also changes the NumPy array:

# a and b share the same underlying memory; only the representation differs
a.add_(1)
print(a)
print(b)
tensor([2., 2., 2., 2., 2.])
[2. 2. 2. 2. 2.]

Converting a NumPy ndarray to a Torch Tensor

import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
[2. 2. 2. 2. 2.]
tensor([2., 2., 2., 2., 2.], dtype=torch.float64)

All Tensors on the CPU support converting to NumPy and back.

CUDA Tensors

Tensors can be moved onto any device using the .to method.

# let us run this cell only if CUDA is available
# We will use ``torch.device`` objects to move tensors in and out of GPU
if torch.cuda.is_available():
    device = torch.device("cuda")          # a CUDA device object
    y = torch.ones_like(x, device=device)  # directly create a tensor on GPU
    x = x.to(device)                       # or just use strings ``.to("cuda")``
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))       # ``.to`` can also change the dtype at the same time!

Warm-up: a two-layer network in NumPy

A fully connected ReLU network with one hidden layer and no biases, trained to predict y from x with an L2 loss.

This implementation uses NumPy to compute the forward pass, the loss, and the backward pass.

A NumPy ndarray is a generic n-dimensional array. It knows nothing about deep learning, gradients, or computation graphs; it is simply a data structure for numerical computation.
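For reference, the gradients computed by hand in the code below are just the chain rule applied to this network. In matrix form, using the same names as the code (a sketch; $\odot$ denotes elementwise multiplication):

$h = x w_1, \quad h_{relu} = \max(h, 0), \quad \hat{y} = h_{relu} w_2, \quad L = \lVert \hat{y} - y \rVert_2^2$

$\frac{\partial L}{\partial \hat{y}} = 2(\hat{y} - y), \quad
\frac{\partial L}{\partial w_2} = h_{relu}^\top \frac{\partial L}{\partial \hat{y}}, \quad
\frac{\partial L}{\partial h_{relu}} = \frac{\partial L}{\partial \hat{y}}\, w_2^\top, \quad
\frac{\partial L}{\partial h} = \frac{\partial L}{\partial h_{relu}} \odot \mathbf{1}[h > 0], \quad
\frac{\partial L}{\partial w_1} = x^\top \frac{\partial L}{\partial h}$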

import numpy as np

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)

# Randomly initialize weights
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)

learning_rate = 1e-6
for t in range(500):  # train for 500 iterations
    # Forward pass: compute predicted y
    h = x.dot(w1)
    h_relu = np.maximum(h, 0)
    y_pred = h_relu.dot(w2)

    # Compute and print loss
    loss = np.square(y_pred - y).sum()
    print(t, loss)

    # Backprop: compute gradients of w1 and w2 with respect to loss
    # loss = ((y_pred - y) ** 2).sum()
    grad_y_pred = 2.0 * (y_pred - y)

    # backpropagate the gradients one step at a time
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_h_relu = grad_y_pred.dot(w2.T)
    grad_h = grad_h_relu.copy()
    grad_h[h < 0] = 0
    grad_w1 = x.T.dot(grad_h)

    # Update weights
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2

0 23734775.983870674
1 18164484.466070224
2 14837301.198169785
3 12230863.71148485
…………
498 0.00021707495985364252
499 0.00020994768522520618

PyTorch: Tensors

This time we use PyTorch tensors to implement the forward pass, the loss, and the backward pass.

A PyTorch Tensor is very similar to a NumPy ndarray. The biggest difference is that a PyTorch Tensor can run on either the CPU or the GPU; to run computations on the GPU, the tensor just needs to be moved to a CUDA device.

import torch

dtype = torch.float
device = torch.device('cpu')
# device = torch.device('cuda:0') # Uncomment this to run on GPU

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)

# Randomly initialize weights
w1 = torch.randn(D_in, H, device=device, dtype=dtype)
w2 = torch.randn(H, D_out, device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.mm(w1)
    h_relu = h.clamp(min=0)
    y_pred = h_relu.mm(w2)

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    print(t, loss)

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0
    grad_w1 = x.t().mm(grad_h)

    # Update weights using gradient descent
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
0 23538196.0
1 18919592.0
…………
498 0.00025007393560372293
499 0.0002459314709994942

A simple autograd example

# Create tensors with requires_grad=True (the default is False).
# Note: torch.tensor infers the dtype from its data, while torch.Tensor
# always creates a tensor of the default float type.
x = torch.tensor(1., requires_grad=True)
w = torch.tensor(2., requires_grad=True)
b = torch.tensor(3., requires_grad=True)

# Build a computation graph.
y = w * x + b # y = 2 * x + 3

# Compute gradients.
y.backward()

print(x.grad) # x.grad = 2
print(w.grad) # w.grad = 1
print(b.grad) # b.grad = 1
tensor(2.)
tensor(1.)
tensor(1.)

PyTorch: Tensors and autograd

An important feature of PyTorch is autograd: once the forward pass is defined and the loss is computed, PyTorch can automatically compute the gradients of all model parameters.

A PyTorch Tensor represents a node in a computation graph. If x is a Tensor with x.requires_grad=True, then x.grad is another Tensor holding the gradient of x with respect to some scalar value (usually the loss).
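One detail worth keeping in mind before reading the training loop below: gradients accumulate into .grad across calls to backward(), which is why the loop zeroes them after each update. A minimal sketch, reusing the simple y = w * x + b example from above:

x = torch.tensor(1., requires_grad=True)
w = torch.tensor(2., requires_grad=True)
b = torch.tensor(3., requires_grad=True)

(w * x + b).backward()
print(w.grad)     # tensor(1.)
(w * x + b).backward()
print(w.grad)     # tensor(2.) -- the new gradient was added to the old one
w.grad.zero_()    # reset to zero before the next backward pass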

import torch

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0") # Uncomment this to run on GPU

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs.
# requires_grad defaults to False, so no gradients are computed for x and y
# during the backward pass.
x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)

# Create random Tensors for the weights.
# requires_grad=True means we want gradients for these Tensors during the backward pass.
w1 = torch.randn(D_in, H, device=device, dtype=dtype, requires_grad=True)
w2 = torch.randn(H, D_out, device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y. This is the same forward pass as before,
    # but we no longer need to keep references to the intermediate values,
    # because we are not computing the backward pass by hand.
    y_pred = x.mm(w1).clamp(min=0).mm(w2)

    # Compute the loss.
    # loss is a Tensor of shape (1,); loss.item() returns it as a Python scalar.
    loss = (y_pred - y).pow(2).sum()
    print(t, loss.item())

    loss.backward()

    # Update the weights manually with gradient descent (later we will use an optimizer).
    # Wrap the updates in torch.no_grad() because w1 and w2 have requires_grad=True,
    # but we do not want autograd to track the update itself.
    # An alternative is to operate on weight.data and weight.grad.data:
    # tensor.data gives a tensor that shares memory with the original
    # but does not record history in the computation graph.
    with torch.no_grad():
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad

        # Manually zero the gradients after updating weights
        # (note: the method is grad.zero_(), not grad_zero_())
        w1.grad.zero_()
        w2.grad.zero_()

0 24256240.0
1 20202350.0
…………
498 0.0007593631744384766
499 0.0007417545421048999

PyTorch: nn

This time we use PyTorch's nn package to build the network. We still use autograd
to build the computation graph and compute the gradients; PyTorch handles the
gradient computation for us automatically.

import torch

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out)
)

loss_fn = torch.nn.MSELoss(reduction='sum')
learning_rate = 1e-4
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the
    # learnable parameters of the model. Internally, the parameters of each
    # Module are stored in Tensors with requires_grad=True, so this call will
    # compute gradients for all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad

0 699.1046752929688
1 647.9535522460938
…………
498 9.313514510722598e-07
499 9.014782449412451e-07

PyTorch: optim

This time, instead of updating the model's weights by hand, we use the optim package to update the parameters for us.
The optim package provides a wide range of optimization algorithms, including SGD+momentum, RMSProp, Adam, and more.
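Switching between these algorithms only changes the line that constructs the optimizer. A minimal sketch (assuming the same model as in the previous section; the hyperparameters are illustrative, not tuned):

optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)  # SGD + momentum
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4)            # RMSProp
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)               # Adam (used below)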

import torch

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Use the nn package to define our model and loss function
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out)
)

loss_fn = torch.nn.MSELoss(reduction='sum')

# Use the optim package to define an Optimizer that will update the weights of
# the model for us. The first argument tells the optimizer which Tensors it
# should update; here we use Adam.
learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model.
    y_pred = model(x)

    # Compute and print loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.item())

    # Before the backward pass, use the optimizer object to zero all of the
    # gradients for the variables it will update (which are the learnable
    # weights of the model). This is because by default, gradients are
    # accumulated in buffers (i.e., not overwritten) whenever .backward()
    # is called. Check out the docs of torch.autograd.backward for more details.
    optimizer.zero_grad()

    # Backward pass: compute gradient of the loss with respect to model
    # parameters
    loss.backward()

    # Calling the step function on an Optimizer makes an update to its
    # parameters
    optimizer.step()
0 711.3644409179688
1 694.257080078125
…………
498 2.771603502260689e-10
499 2.650134556247963e-10

PyTorch: custom nn Modules

We can define our own model as a subclass of nn.Module. Whenever the model is more complex than what a Sequential container can express, this is the way to define it.

import torch

class TwoLayerNet(torch.nn.Module):

    def __init__(self, D_in, H, D_out):
        """
        In the constructor we instantiate two nn.Linear modules and assign them
        as member variables.
        """
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        """
        In the forward function we accept a Tensor of input data and we must
        return a Tensor of output data. We can use Modules defined in the
        constructor as well as arbitrary operators on Tensors.
        """
        h_relu = self.linear1(x).clamp(min=0)
        y_pred = self.linear2(h_relu)
        return y_pred


# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Construct our model by instantiating the class defined above
model = TwoLayerNet(D_in, H, D_out)

# Construct our loss function and an Optimizer. The call to model.parameters()
# in the SGD constructor will contain the learnable parameters of the two
# nn.Linear modules which are members of the model.
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

for t in range(500):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = criterion(y_pred, y)
    print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
0 684.3630981445312
1 634.8236083984375
…………
498 1.6948175471043214e-05
499 1.6512673028046265e-05

FizzBuzz

FizzBuzz is a simple counting game. The rules: count up from 1; when you reach a multiple of 3, say fizz; a multiple of 5, say buzz; a multiple of 15, say fizzbuzz; otherwise just say the number.

We can write a small helper that decides whether to output the number itself or fizz, buzz, or fizzbuzz.

def fizz_buzz_encode(num):
    if num % 15 == 0: return 3
    elif num % 5 == 0: return 2
    elif num % 3 == 0: return 1
    else: return 0

def fizz_buzz_decode(num, index):
    return [str(num), 'fizz', 'buzz', 'fizzbuzz'][index]
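A quick sanity check of the two helpers before any training (decoding what we just encoded should reproduce the game):

for i in range(1, 16):
    print(fizz_buzz_decode(i, fizz_buzz_encode(i)), end=' ')
# 1 2 fizz 4 buzz fizz 7 8 fizz buzz 11 fizz 13 14 fizzbuzz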

First we create the training data:

import torch
import numpy as np

NUM_DIGITS = 10

# Represent each input number by the array of its binary digits.
def binary_encode(num):
    return np.array([num >> d & 1 for d in range(NUM_DIGITS)])

# Train on the numbers 101..1023, so that 1..100 can be held out for testing.
trX = torch.Tensor([binary_encode(i) for i in range(101, 2 ** NUM_DIGITS)])
trY = torch.LongTensor([fizz_buzz_encode(i) for i in range(101, 2 ** NUM_DIGITS)])
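Note that binary_encode lists the bits starting from the least-significant bit. For example, with NUM_DIGITS = 10 as above:

print(binary_encode(5))   # [1 0 1 0 0 0 0 0 0 0]  -- 5 = 0b101, least-significant bit first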

Then we define the model in PyTorch:

# Define the model
NUM_HIDDEN = 100
model = torch.nn.Sequential(
    torch.nn.Linear(NUM_DIGITS, NUM_HIDDEN),
    torch.nn.ReLU(),
    torch.nn.Linear(NUM_HIDDEN, 4)
)
  • To make the model learn the FizzBuzz game, we need to define a loss function and an optimization algorithm.
  • The optimizer keeps lowering the loss so that the model does as well as possible on this task.
  • A low loss usually means the model is doing well; a high loss means it is doing poorly.
  • Since FizzBuzz is essentially a classification problem, we use the Cross Entropy Loss.
  • For the optimizer we use Stochastic Gradient Descent.
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

Here is the training code:

BATCH_SIZE = 128
for epoch in range(10000):
    for start in range(0, len(trX), BATCH_SIZE):
        end = start + BATCH_SIZE
        batchX = trX[start:end]
        batchY = trY[start:end]

        y_pred = model(batchX)
        loss = loss_fn(y_pred, batchY)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # After each epoch, report the loss on the full training set
    loss = loss_fn(model(trX), trY).item()
    print('epoch:', epoch, 'loss:', loss)
epoch: 0 loss: 1.1895811557769775
epoch: 1 loss: 1.1572147607803345
…………
epoch: 5097 loss: 0.02661093883216381
epoch: 5098 loss: 0.026590632274746895
epoch: 5099 loss: 0.026588384062051773
…………
epoch: 8016 loss: 0.011458768509328365
epoch: 8017 loss: 0.011454952880740166
epoch: 8018 loss: 0.011453792452812195
…………
epoch: 9998 loss: 0.00786476582288742
epoch: 9999 loss: 0.00786191038787365

Finally, we use the trained model to play FizzBuzz on the numbers 1 to 100:

testX = torch.Tensor([binary_encode(i) for i in range(1, 101)])
with torch.no_grad():
    pred_Y = model(testX)
predictions = zip(range(1, 101), pred_Y.max(1)[1])

Show the results of the game:

for num, index in predictions:
    print(fizz_buzz_decode(num, index), end=' ')
1 2 fizz 4 buzz fizz 7 8 fizz buzz 11 fizz 13 14 fizzbuzz 16 17 fizz 19 buzz 21 22 23 fizz buzz 26 fizz 28 29 fizzbuzz 31 32 fizz 34 buzz fizz 37 38 fizz buzz 41 fizz 43 44 fizzbuzz 46 47 fizz 49 buzz fizz 52 53 fizz buzz 56 fizz 58 59 fizzbuzz 61 62 fizz 64 buzz fizz 67 68 69 buzz 71 fizz 73 74 fizzbuzz 76 77 fizz 79 buzz 81 82 83 84 buzz 86 87 88 89 fizzbuzz 91 92 93 94 buzz fizz 97 98 fizz buzz

Count how many predictions are correct, and check which ones are wrong:

# For a classification problem, tensor.max(1) returns (values, indices) along dim 1;
# [1] picks the indices, i.e. the predicted class for each number.
print(np.sum(pred_Y.max(1)[1].numpy() == np.array([fizz_buzz_encode(i) for i in range(1, 101)])))
94
pred_Y.max(1)[1].numpy() == np.array([fizz_buzz_encode(i) for i in range(1, 101)])
array([ True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True, False,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True, False,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True, False,
        True,  True, False,  True,  True, False,  True,  True,  True,
        True,  True, False,  True,  True,  True,  True,  True,  True,
        True])
pred_Y.max(1)
torch.return_types.max(
values=tensor([ 5.9531,  5.9475,  5.8658,  7.1499, 10.3761,  8.2045,  8.1089,  6.6718,
         5.6398,  7.5722,  6.4976,  9.0869,  8.1241,  8.0379,  8.4377,  7.2665,
         7.3191,  7.8744,  5.6651,  7.6147,  2.2065,  7.8375,  4.2420,  6.6890,
         3.5175,  9.0680,  8.0666,  8.6170,  8.5703,  7.5231,  7.3321,  6.3975,
         3.3405,  7.5249,  8.5877,  7.6400,  5.8178,  7.7374,  4.2418,  6.5080,
         7.3046,  6.7514,  2.8480,  8.3515,  5.0900,  8.7313,  6.0416,  8.3804,
         8.2009,  4.7483,  4.1110,  4.2541,  3.0316,  4.7115,  8.0126,  8.8917,
         2.6943,  6.2795,  8.0649,  5.5002,  7.3595,  6.0792,  3.9122,  6.3986,
         6.1323,  6.6845,  7.4449,  7.3432,  4.2463,  8.6458,  8.3745,  8.8894,
         7.9096,  8.1622,  3.5034,  7.2827,  8.8453,  7.8358,  5.6428,  6.7707,
         1.3065,  5.7157,  3.3958,  4.8879,  5.4819,  7.6762,  4.3674,  7.3603,
         8.4167,  6.3594,  4.1352,  8.4632,  5.4859,  5.7701,  6.6684,  8.2688,
         8.7533,  5.7605,  5.5259,  9.0591]),
indices=tensor([0, 0, 1, 0, 2, 1, 0, 0, 1, 2, 0, 1, 0, 0, 3, 0, 0, 1, 0, 2, 0, 0, 0, 1,
        2, 0, 1, 0, 0, 3, 0, 0, 1, 0, 2, 1, 0, 0, 1, 2, 0, 1, 0, 0, 3, 0, 0, 1,
        0, 2, 1, 0, 0, 1, 2, 0, 1, 0, 0, 3, 0, 0, 1, 0, 2, 1, 0, 0, 0, 2, 0, 1,
        0, 0, 3, 0, 0, 1, 0, 2, 0, 0, 0, 0, 2, 0, 0, 0, 0, 3, 0, 0, 0, 0, 2, 1,
        0, 0, 1, 2]))
pred_Y.max(1)[1]
tensor([0, 0, 1, 0, 2, 1, 0, 0, 1, 2, 0, 1, 0, 0, 3, 0, 0, 1, 0, 2, 0, 0, 0, 1,
        2, 0, 1, 0, 0, 3, 0, 0, 1, 0, 2, 1, 0, 0, 1, 2, 0, 1, 0, 0, 3, 0, 0, 1,
        0, 2, 1, 0, 0, 1, 2, 0, 1, 0, 0, 3, 0, 0, 1, 0, 2, 1, 0, 0, 0, 2, 0, 1,
        0, 0, 3, 0, 0, 1, 0, 2, 0, 0, 0, 0, 2, 0, 0, 0, 0, 3, 0, 0, 0, 0, 2, 1,
        0, 0, 1, 2])