Artificial Intelligence Nanodegree

Machine Translation Project

In this notebook, sections that end with '(IMPLEMENTATION)' in the header indicate that the following blocks of code will require additional functionality which you must provide. Please be sure to read the instructions carefully!

Introduction

In this notebook, you will build a deep neural network that functions as part of an end-to-end machine translation pipeline. Your completed pipeline will accept English text as input and return the French translation.

  • Preprocess - You'll convert text to sequence of integers.
  • Models Create models which accepts a sequence of integers as input and returns a probability distribution over possible translations. After learning about the basic types of neural networks that are often used for machine translation, you will engage in your own investigations, to design your own model!
  • Prediction Run the model on English text.
In [1]:
%load_ext autoreload
%aimport helper, tests
%autoreload 1
In [2]:
import collections

import helper
import numpy as np
import project_tests as tests

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Model, Sequential
from keras.layers import GRU, Input, Dense, TimeDistributed, Activation, RepeatVector, Bidirectional, Dropout, LSTM
from keras.layers.embeddings import Embedding
from keras.optimizers import Adam
from keras.losses import sparse_categorical_crossentropy
Using TensorFlow backend.

Verify access to the GPU

The following test applies only if you expect to be using a GPU, e.g., while running in a Udacity Workspace or using an AWS instance with GPU support. Run the next cell, and verify that the device_type is "GPU".

  • If the device is not GPU & you are running from a Udacity Workspace, then save your workspace with the icon at the top, then click "enable" at the bottom of the workspace.
  • If the device is not GPU & you are running from an AWS instance, then refer to the cloud computing instructions in the classroom to verify your setup steps.
In [3]:
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
[name: "/cpu:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 14855457735451908914
, name: "/gpu:0"
device_type: "GPU"
memory_limit: 357302272
locality {
  bus_id: 1
}
incarnation: 17466780435120687828
physical_device_desc: "device: 0, name: Tesla K80, pci bus id: 0000:00:04.0"
]

Dataset

We begin by investigating the dataset that will be used to train and evaluate your pipeline. The most common datasets used for machine translation are from WMT. However, that will take a long time to train a neural network on. We'll be using a dataset we created for this project that contains a small vocabulary. You'll be able to train your model in a reasonable time with this dataset.

Load Data

The data is located in data/small_vocab_en and data/small_vocab_fr. The small_vocab_en file contains English sentences with their French translations in the small_vocab_fr file. Load the English and French data from these files from running the cell below.

In [4]:
# Load English data
english_sentences = helper.load_data('data/small_vocab_en')
# Load French data
french_sentences = helper.load_data('data/small_vocab_fr')

print('Dataset Loaded')
Dataset Loaded

Files

Each line in small_vocab_en contains an English sentence with the respective translation in each line of small_vocab_fr. View the first two lines from each file.

In [5]:
for sample_i in range(5):
    print('English sample {}:  {}'.format(sample_i + 1, english_sentences[sample_i]))
    print('French sample {}:  {}\n'.format(sample_i + 1, french_sentences[sample_i]))
English sample 1:  new jersey is sometimes quiet during autumn , and it is snowy in april .
French sample 1:  new jersey est parfois calme pendant l' automne , et il est neigeux en avril .

English sample 2:  the united states is usually chilly during july , and it is usually freezing in november .
French sample 2:  les états-unis est généralement froid en juillet , et il gèle habituellement en novembre .

English sample 3:  california is usually quiet during march , and it is usually hot in june .
French sample 3:  california est généralement calme en mars , et il est généralement chaud en juin .

English sample 4:  the united states is sometimes mild during june , and it is cold in september .
French sample 4:  les états-unis est parfois légère en juin , et il fait froid en septembre .

English sample 5:  your least liked fruit is the grape , but my least liked is the apple .
French sample 5:  votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme .

From looking at the sentences, you can see they have been preprocessed already. The puncuations have been delimited using spaces. All the text have been converted to lowercase. This should save you some time, but the text requires more preprocessing.

Vocabulary

The complexity of the problem is determined by the complexity of the vocabulary. A more complex vocabulary is a more complex problem. Let's look at the complexity of the dataset we'll be working with.

In [6]:
english_words_counter = collections.Counter([word for sentence in english_sentences for word in sentence.split()])
french_words_counter = collections.Counter([word for sentence in french_sentences for word in sentence.split()])

print('{} English words.'.format(len([word for sentence in english_sentences for word in sentence.split()])))
print('{} unique English words.'.format(len(english_words_counter)))
print('10 Most common words in the English dataset:')
print('"' + '" "'.join(list(zip(*english_words_counter.most_common(10)))[0]) + '"')
print()
print('{} French words.'.format(len([word for sentence in french_sentences for word in sentence.split()])))
print('{} unique French words.'.format(len(french_words_counter)))
print('10 Most common words in the French dataset:')
print('"' + '" "'.join(list(zip(*french_words_counter.most_common(10)))[0]) + '"')
1823250 English words.
227 unique English words.
10 Most common words in the English dataset:
"is" "," "." "in" "it" "during" "the" "but" "and" "sometimes"

1961295 French words.
355 unique French words.
10 Most common words in the French dataset:
"est" "." "," "en" "il" "les" "mais" "et" "la" "parfois"

For comparison, Alice's Adventures in Wonderland contains 2,766 unique words of a total of 15,500 words.

Preprocess

For this project, you won't use text data as input to your model. Instead, you'll convert the text into sequences of integers using the following preprocess methods:

  1. Tokenize the words into ids
  2. Add padding to make all the sequences the same length.

Time to start preprocessing the data...

Tokenize (IMPLEMENTATION)

For a neural network to predict on text data, it first has to be turned into data it can understand. Text data like "dog" is a sequence of ASCII character encodings. Since a neural network is a series of multiplication and addition operations, the input data needs to be number(s).

We can turn each character into a number or each word into a number. These are called character and word ids, respectively. Character ids are used for character level models that generate text predictions for each character. A word level model uses word ids that generate text predictions for each word. Word level models tend to learn better, since they are lower in complexity, so we'll use those.

Turn each sentence into a sequence of words ids using Keras's Tokenizer function. Use this function to tokenize english_sentences and french_sentences in the cell below.

Running the cell will run tokenize on sample data and show output for debugging.

In [7]:
def tokenize(x):
    """
    Tokenize x
    :param x: List of sentences/strings to be tokenized
    :return: Tuple of (tokenized x data, tokenizer used to tokenize x)
    """
    # TODO: Implement
    tokenizer = Tokenizer()
    tokenizer.fit_on_texts(x)
    return tokenizer.texts_to_sequences(x), tokenizer

tests.test_tokenize(tokenize)

# Tokenize Example output
text_sentences = [
    'The quick brown fox jumps over the lazy dog .',
    'By Jove , my quick study of lexicography won a prize .',
    'This is a short sentence .']
text_tokenized, text_tokenizer = tokenize(text_sentences)
print(text_tokenizer.word_index)
print()
for sample_i, (sent, token_sent) in enumerate(zip(text_sentences, text_tokenized)):
    print('Sequence {} in x'.format(sample_i + 1))
    print('  Input:  {}'.format(sent))
    print('  Output: {}'.format(token_sent))
{'the': 1, 'quick': 2, 'a': 3, 'brown': 4, 'fox': 5, 'jumps': 6, 'over': 7, 'lazy': 8, 'dog': 9, 'by': 10, 'jove': 11, 'my': 12, 'study': 13, 'of': 14, 'lexicography': 15, 'won': 16, 'prize': 17, 'this': 18, 'is': 19, 'short': 20, 'sentence': 21}

Sequence 1 in x
  Input:  The quick brown fox jumps over the lazy dog .
  Output: [1, 2, 4, 5, 6, 7, 1, 8, 9]
Sequence 2 in x
  Input:  By Jove , my quick study of lexicography won a prize .
  Output: [10, 11, 12, 2, 13, 14, 15, 16, 3, 17]
Sequence 3 in x
  Input:  This is a short sentence .
  Output: [18, 19, 3, 20, 21]

Padding (IMPLEMENTATION)

When batching the sequence of word ids together, each sequence needs to be the same length. Since sentences are dynamic in length, we can add padding to the end of the sequences to make them the same length.

Make sure all the English sequences have the same length and all the French sequences have the same length by adding padding to the end of each sequence using Keras's pad_sequences function.

In [8]:
def pad(x, length=None):
    """
    Pad x
    :param x: List of sequences.
    :param length: Length to pad the sequence to.  If None, use length of longest sequence in x.
    :return: Padded numpy array of sequences
    """
    # TODO: Implement
    return pad_sequences(x, maxlen=length, padding='post')

tests.test_pad(pad)

# Pad Tokenized output
test_pad = pad(text_tokenized)
for sample_i, (token_sent, pad_sent) in enumerate(zip(text_tokenized, test_pad)):
    print('Sequence {} in x'.format(sample_i + 1))
    print('  Input:  {}'.format(np.array(token_sent)))
    print('  Output: {}'.format(pad_sent))
Sequence 1 in x
  Input:  [1 2 4 5 6 7 1 8 9]
  Output: [1 2 4 5 6 7 1 8 9 0]
Sequence 2 in x
  Input:  [10 11 12  2 13 14 15 16  3 17]
  Output: [10 11 12  2 13 14 15 16  3 17]
Sequence 3 in x
  Input:  [18 19  3 20 21]
  Output: [18 19  3 20 21  0  0  0  0  0]

Preprocess Pipeline

Your focus for this project is to build neural network architecture, so we won't ask you to create a preprocess pipeline. Instead, we've provided you with the implementation of the preprocess function.

In [9]:
def preprocess(x, y):
    """
    Preprocess x and y
    :param x: Feature List of sentences
    :param y: Label List of sentences
    :return: Tuple of (Preprocessed x, Preprocessed y, x tokenizer, y tokenizer)
    """
    preprocess_x, x_tk = tokenize(x)
    preprocess_y, y_tk = tokenize(y)

    preprocess_x = pad(preprocess_x)
    preprocess_y = pad(preprocess_y)

    # Keras's sparse_categorical_crossentropy function requires the labels to be in 3 dimensions
    preprocess_y = preprocess_y.reshape(*preprocess_y.shape, 1)

    return preprocess_x, preprocess_y, x_tk, y_tk

preproc_english_sentences, preproc_french_sentences, english_tokenizer, french_tokenizer =\
    preprocess(english_sentences, french_sentences)
    
max_english_sequence_length = preproc_english_sentences.shape[1]
max_french_sequence_length = preproc_french_sentences.shape[1]
english_vocab_size = len(english_tokenizer.word_index)
french_vocab_size = len(french_tokenizer.word_index)

print('Data Preprocessed')
print("Max English sentence length:", max_english_sequence_length)
print("Max French sentence length:", max_french_sequence_length)
print("English vocabulary size:", english_vocab_size)
print("French vocabulary size:", french_vocab_size)
Data Preprocessed
Max English sentence length: 15
Max French sentence length: 21
English vocabulary size: 199
French vocabulary size: 344

Models

In this section, you will experiment with various neural network architectures. You will begin by training four relatively simple architectures.

  • Model 1 is a simple RNN
  • Model 2 is a RNN with Embedding
  • Model 3 is a Bidirectional RNN
  • Model 4 is an optional Encoder-Decoder RNN

After experimenting with the four simple architectures, you will construct a deeper architecture that is designed to outperform all four models.

Ids Back to Text

The neural network will be translating the input to words ids, which isn't the final form we want. We want the French translation. The function logits_to_text will bridge the gab between the logits from the neural network to the French translation. You'll be using this function to better understand the output of the neural network.

In [10]:
def logits_to_text(logits, tokenizer):
    """
    Turn logits from a neural network into text using the tokenizer
    :param logits: Logits from a neural network
    :param tokenizer: Keras Tokenizer fit on the labels
    :return: String that represents the text of the logits
    """
    index_to_words = {id: word for word, id in tokenizer.word_index.items()}
    index_to_words[0] = '<PAD>'

    return ' '.join([index_to_words[prediction] for prediction in np.argmax(logits, 1)])

print('`logits_to_text` function loaded.')
`logits_to_text` function loaded.

Model 1: RNN (IMPLEMENTATION)

RNN A basic RNN model is a good baseline for sequence data. In this model, you'll build a RNN that translates English to French.

In [11]:
def simple_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
    """
    Build and train a basic RNN on x and y
    :param input_shape: Tuple of input shape
    :param output_sequence_length: Length of output sequence
    :param english_vocab_size: Number of unique English words in the dataset
    :param french_vocab_size: Number of unique French words in the dataset
    :return: Keras model built, but not trained
    """
    # Hyperparameters
    learning_rate = 0.005
    
    # TODO: Build the layers
    model = Sequential()
    model.add(GRU(256, input_shape=input_shape[1:], return_sequences=True))
    model.add(TimeDistributed(Dense(1024, activation='relu')))
    model.add(Dropout(0.5))
    model.add(TimeDistributed(Dense(french_vocab_size, activation='softmax'))) 

    # Compile model
    model.compile(loss=sparse_categorical_crossentropy,
                  optimizer=Adam(learning_rate),
                  metrics=['accuracy'])
    return model

tests.test_simple_model(simple_model)

# Reshaping the input to work with a basic RNN
tmp_x = pad(preproc_english_sentences, max_french_sequence_length)
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2], 1))

# Train the neural network
simple_rnn_model = simple_model(
    tmp_x.shape,
    max_french_sequence_length,
    english_vocab_size,
    french_vocab_size)

print(simple_rnn_model.summary())

simple_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)

# Print prediction(s)
print(logits_to_text(simple_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
gru_2 (GRU)                  (None, 21, 256)           198144    
_________________________________________________________________
time_distributed_3 (TimeDist (None, 21, 1024)          263168    
_________________________________________________________________
dropout_2 (Dropout)          (None, 21, 1024)          0         
_________________________________________________________________
time_distributed_4 (TimeDist (None, 21, 344)           352600    
=================================================================
Total params: 813,912
Trainable params: 813,912
Non-trainable params: 0
_________________________________________________________________
None
Train on 110288 samples, validate on 27573 samples
Epoch 1/10
110288/110288 [==============================] - 18s 163us/step - loss: 1.9116 - acc: 0.5464 - val_loss: nan - val_acc: 0.6380
Epoch 2/10
110288/110288 [==============================] - 16s 144us/step - loss: 1.2060 - acc: 0.6474 - val_loss: nan - val_acc: 0.6789
Epoch 3/10
110288/110288 [==============================] - 16s 145us/step - loss: 1.0645 - acc: 0.6739 - val_loss: nan - val_acc: 0.6931
Epoch 4/10
110288/110288 [==============================] - 16s 145us/step - loss: 0.9787 - acc: 0.6880 - val_loss: nan - val_acc: 0.7117
Epoch 5/10
110288/110288 [==============================] - 16s 145us/step - loss: 0.9178 - acc: 0.7009 - val_loss: nan - val_acc: 0.7259
Epoch 6/10
110288/110288 [==============================] - 16s 144us/step - loss: 0.8632 - acc: 0.7162 - val_loss: nan - val_acc: 0.7478
Epoch 7/10
110288/110288 [==============================] - 16s 144us/step - loss: 0.8034 - acc: 0.7344 - val_loss: nan - val_acc: 0.7637
Epoch 8/10
110288/110288 [==============================] - 16s 144us/step - loss: 0.7656 - acc: 0.7438 - val_loss: nan - val_acc: 0.7587
Epoch 9/10
110288/110288 [==============================] - 16s 144us/step - loss: 0.7316 - acc: 0.7532 - val_loss: nan - val_acc: 0.7857
Epoch 10/10
110288/110288 [==============================] - 16s 144us/step - loss: 0.7054 - acc: 0.7603 - val_loss: nan - val_acc: 0.7369
new jersey est parfois calme en mois de mai et il est neigeux en <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
In [12]:
# Print prediction(s)
print("Prediction:")
print(logits_to_text(simple_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))

print("\nCorrect Translation:")
print(french_sentences[:1])

print("\nOriginal text:")
print(english_sentences[:1])
Prediction:
new jersey est parfois calme en mois de mai et il est neigeux en <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>

Correct Translation:
["new jersey est parfois calme pendant l' automne , et il est neigeux en avril ."]

Original text:
['new jersey is sometimes quiet during autumn , and it is snowy in april .']

Model 2: Embedding (IMPLEMENTATION)

RNN You've turned the words into ids, but there's a better representation of a word. This is called word embeddings. An embedding is a vector representation of the word that is close to similar words in n-dimensional space, where the n represents the size of the embedding vectors.

In this model, you'll create a RNN model using embedding.

In [13]:
def embed_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
    """
    Build and train a RNN model using word embedding on x and y
    :param input_shape: Tuple of input shape
    :param output_sequence_length: Length of output sequence
    :param english_vocab_size: Number of unique English words in the dataset
    :param french_vocab_size: Number of unique French words in the dataset
    :return: Keras model built, but not trained
    """
    # TODO: Implement

    # Hyperparameters
    learning_rate = 0.005
    
    # TODO: Build the layers
    model = Sequential()
    model.add(Embedding(english_vocab_size, 256, input_length=input_shape[1], input_shape=input_shape[1:]))
    model.add(GRU(256, return_sequences=True))    
    model.add(TimeDistributed(Dense(1024, activation='relu')))
    model.add(Dropout(0.5))
    model.add(TimeDistributed(Dense(french_vocab_size, activation='softmax'))) 

    # Compile model
    model.compile(loss=sparse_categorical_crossentropy,
                  optimizer=Adam(learning_rate),
                  metrics=['accuracy'])
    return model

tests.test_embed_model(embed_model)

# TODO: Reshape the input
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2]))

# TODO: Train the neural network
embed_rnn_model = embed_model(
    tmp_x.shape,
    preproc_french_sentences.shape[1],
    len(english_tokenizer.word_index)+1,
    len(french_tokenizer.word_index)+1)

embed_rnn_model.summary()

embed_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)

# TODO: Print prediction(s)
print(logits_to_text(embed_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_2 (Embedding)      (None, 21, 256)           51200     
_________________________________________________________________
gru_4 (GRU)                  (None, 21, 256)           393984    
_________________________________________________________________
time_distributed_7 (TimeDist (None, 21, 1024)          263168    
_________________________________________________________________
dropout_4 (Dropout)          (None, 21, 1024)          0         
_________________________________________________________________
time_distributed_8 (TimeDist (None, 21, 345)           353625    
=================================================================
Total params: 1,061,977
Trainable params: 1,061,977
Non-trainable params: 0
_________________________________________________________________
Train on 110288 samples, validate on 27573 samples
Epoch 1/10
110288/110288 [==============================] - 20s 178us/step - loss: 1.4683 - acc: 0.6636 - val_loss: 0.5285 - val_acc: 0.8312
Epoch 2/10
110288/110288 [==============================] - 19s 175us/step - loss: 0.4413 - acc: 0.8563 - val_loss: 0.3256 - val_acc: 0.8903
Epoch 3/10
110288/110288 [==============================] - 19s 175us/step - loss: 0.3123 - acc: 0.8955 - val_loss: 0.2554 - val_acc: 0.9134
Epoch 4/10
110288/110288 [==============================] - 19s 175us/step - loss: 0.2573 - acc: 0.9132 - val_loss: 0.2263 - val_acc: 0.9217
Epoch 5/10
110288/110288 [==============================] - 19s 175us/step - loss: 0.2279 - acc: 0.9221 - val_loss: 0.2084 - val_acc: 0.9284
Epoch 6/10
110288/110288 [==============================] - 19s 174us/step - loss: 0.2119 - acc: 0.9270 - val_loss: 0.1967 - val_acc: 0.9320
Epoch 7/10
110288/110288 [==============================] - 19s 174us/step - loss: 0.2009 - acc: 0.9300 - val_loss: 0.1923 - val_acc: 0.9332
Epoch 8/10
110288/110288 [==============================] - 19s 174us/step - loss: 0.1924 - acc: 0.9326 - val_loss: 0.1916 - val_acc: 0.9328
Epoch 9/10
110288/110288 [==============================] - 19s 174us/step - loss: 0.1891 - acc: 0.9334 - val_loss: 0.1864 - val_acc: 0.9345
Epoch 10/10
110288/110288 [==============================] - 19s 174us/step - loss: 0.1835 - acc: 0.9349 - val_loss: 0.1808 - val_acc: 0.9365
new jersey est parfois calme en l' automne et il est neigeux en avril <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
In [14]:
# Print prediction(s)
print("Prediction:")
print(logits_to_text(embed_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))

print("\nCorrect Translation:")
print(french_sentences[:1])

print("\nOriginal text:")
print(english_sentences[:1])
Prediction:
new jersey est parfois calme en l' automne et il est neigeux en avril <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>

Correct Translation:
["new jersey est parfois calme pendant l' automne , et il est neigeux en avril ."]

Original text:
['new jersey is sometimes quiet during autumn , and it is snowy in april .']

Model 3: Bidirectional RNNs (IMPLEMENTATION)

RNN One restriction of a RNN is that it can't see the future input, only the past. This is where bidirectional recurrent neural networks come in. They are able to see the future data.

In [15]:
def bd_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
    """
    Build and train a bidirectional RNN model on x and y
    :param input_shape: Tuple of input shape
    :param output_sequence_length: Length of output sequence
    :param english_vocab_size: Number of unique English words in the dataset
    :param french_vocab_size: Number of unique French words in the dataset
    :return: Keras model built, but not trained
    """
    # TODO: Implement

    # Hyperparameters
    learning_rate = 0.003
    
    # TODO: Build the layers
    model = Sequential()
    model.add(Bidirectional(GRU(128, return_sequences=True), input_shape=input_shape[1:]))
    model.add(TimeDistributed(Dense(1024, activation='relu')))
    model.add(Dropout(0.5))
    model.add(TimeDistributed(Dense(french_vocab_size, activation='softmax'))) 

    # Compile model
    model.compile(loss=sparse_categorical_crossentropy,
                  optimizer=Adam(learning_rate),
                  metrics=['accuracy'])
    return model

tests.test_bd_model(bd_model)

# TODO: Reshape the input
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2]))

# TODO: Train and Print prediction(s)
embed_rnn_model = embed_model(
    tmp_x.shape,
    preproc_french_sentences.shape[1],
    len(english_tokenizer.word_index)+1,
    len(french_tokenizer.word_index)+1)

embed_rnn_model.summary()

embed_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)

print(logits_to_text(embed_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_3 (Embedding)      (None, 21, 256)           51200     
_________________________________________________________________
gru_6 (GRU)                  (None, 21, 256)           393984    
_________________________________________________________________
time_distributed_11 (TimeDis (None, 21, 1024)          263168    
_________________________________________________________________
dropout_6 (Dropout)          (None, 21, 1024)          0         
_________________________________________________________________
time_distributed_12 (TimeDis (None, 21, 345)           353625    
=================================================================
Total params: 1,061,977
Trainable params: 1,061,977
Non-trainable params: 0
_________________________________________________________________
Train on 110288 samples, validate on 27573 samples
Epoch 1/10
110288/110288 [==============================] - 20s 179us/step - loss: 1.4631 - acc: 0.6646 - val_loss: 0.5201 - val_acc: 0.8348
Epoch 2/10
110288/110288 [==============================] - 19s 175us/step - loss: 0.4366 - acc: 0.8579 - val_loss: 0.3183 - val_acc: 0.8946
Epoch 3/10
110288/110288 [==============================] - 19s 175us/step - loss: 0.3064 - acc: 0.8986 - val_loss: 0.2558 - val_acc: 0.9141
Epoch 4/10
110288/110288 [==============================] - 19s 175us/step - loss: 0.2555 - acc: 0.9144 - val_loss: 0.2284 - val_acc: 0.9214
Epoch 5/10
110288/110288 [==============================] - 19s 175us/step - loss: 0.2280 - acc: 0.9224 - val_loss: 0.2102 - val_acc: 0.9267
Epoch 6/10
110288/110288 [==============================] - 19s 174us/step - loss: 0.2140 - acc: 0.9265 - val_loss: 0.2024 - val_acc: 0.9310
Epoch 7/10
110288/110288 [==============================] - 19s 175us/step - loss: 0.2014 - acc: 0.9300 - val_loss: 0.1979 - val_acc: 0.9316
Epoch 8/10
110288/110288 [==============================] - 19s 174us/step - loss: 0.1945 - acc: 0.9320 - val_loss: 0.1908 - val_acc: 0.9335
Epoch 9/10
110288/110288 [==============================] - 19s 175us/step - loss: 0.1877 - acc: 0.9341 - val_loss: 0.1875 - val_acc: 0.9355
Epoch 10/10
110288/110288 [==============================] - 19s 174us/step - loss: 0.1839 - acc: 0.9351 - val_loss: 0.1853 - val_acc: 0.9351
new jersey est parfois calme en l' automne et il est neigeux en avril <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
In [16]:
# Print prediction(s)
print("Prediction:")
print(logits_to_text(embed_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))

print("\nCorrect Translation:")
print(french_sentences[:1])

print("\nOriginal text:")
print(english_sentences[:1])
Prediction:
new jersey est parfois calme en l' automne et il est neigeux en avril <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>

Correct Translation:
["new jersey est parfois calme pendant l' automne , et il est neigeux en avril ."]

Original text:
['new jersey is sometimes quiet during autumn , and it is snowy in april .']

Model 4: Encoder-Decoder (OPTIONAL)

Time to look at encoder-decoder models. This model is made up of an encoder and decoder. The encoder creates a matrix representation of the sentence. The decoder takes this matrix as input and predicts the translation as output.

Create an encoder-decoder model in the cell below.

In [17]:
def encdec_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
    """
    Build and train an encoder-decoder model on x and y
    :param input_shape: Tuple of input shape
    :param output_sequence_length: Length of output sequence
    :param english_vocab_size: Number of unique English words in the dataset
    :param french_vocab_size: Number of unique French words in the dataset
    :return: Keras model built, but not trained
    """
    # OPTIONAL: Implement
    
    # Hyperparameters
    learning_rate = 0.001
    
    # Build the layers    
    model = Sequential()
    # Encoder
    model.add(GRU(256, input_shape=input_shape[1:], go_backwards=True))
    model.add(RepeatVector(output_sequence_length))
    # Decoder
    model.add(GRU(256, return_sequences=True))
    model.add(TimeDistributed(Dense(1024, activation='relu')))
    model.add(Dropout(0.5))
    model.add(TimeDistributed(Dense(french_vocab_size, activation='softmax')))

    # Compile model
    model.compile(loss=sparse_categorical_crossentropy,
                  optimizer=Adam(learning_rate),
                  metrics=['accuracy'])
    
    return model

tests.test_encdec_model(encdec_model)

# Reshape the input
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2], 1))

# Train and Print prediction(s)
encdec_rnn_model = encdec_model(
    tmp_x.shape,
    preproc_french_sentences.shape[1],
    len(english_tokenizer.word_index)+1,
    len(french_tokenizer.word_index)+1)

encdec_rnn_model.summary()

encdec_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
gru_9 (GRU)                  (None, 256)               198144    
_________________________________________________________________
repeat_vector_2 (RepeatVecto (None, 21, 256)           0         
_________________________________________________________________
gru_10 (GRU)                 (None, 21, 256)           393984    
_________________________________________________________________
time_distributed_15 (TimeDis (None, 21, 1024)          263168    
_________________________________________________________________
dropout_8 (Dropout)          (None, 21, 1024)          0         
_________________________________________________________________
time_distributed_16 (TimeDis (None, 21, 345)           353625    
=================================================================
Total params: 1,208,921
Trainable params: 1,208,921
Non-trainable params: 0
_________________________________________________________________
Train on 110288 samples, validate on 27573 samples
Epoch 1/10
110288/110288 [==============================] - 26s 234us/step - loss: 2.4930 - acc: 0.4710 - val_loss: 1.7839 - val_acc: 0.5645
Epoch 2/10
110288/110288 [==============================] - 25s 227us/step - loss: 1.6361 - acc: 0.5765 - val_loss: 1.4294 - val_acc: 0.6144
Epoch 3/10
110288/110288 [==============================] - 25s 228us/step - loss: 1.4338 - acc: 0.6076 - val_loss: 1.3340 - val_acc: 0.6338
Epoch 4/10
110288/110288 [==============================] - 25s 228us/step - loss: 1.3377 - acc: 0.6287 - val_loss: 1.2601 - val_acc: 0.6497
Epoch 5/10
110288/110288 [==============================] - 25s 228us/step - loss: 1.2738 - acc: 0.6440 - val_loss: 1.2016 - val_acc: 0.6616
Epoch 6/10
110288/110288 [==============================] - 25s 228us/step - loss: 1.2218 - acc: 0.6536 - val_loss: 1.1437 - val_acc: 0.6703
Epoch 7/10
110288/110288 [==============================] - 25s 228us/step - loss: 1.1717 - acc: 0.6631 - val_loss: 1.1550 - val_acc: 0.6617
Epoch 8/10
110288/110288 [==============================] - 25s 228us/step - loss: 1.1286 - acc: 0.6721 - val_loss: 1.0781 - val_acc: 0.6819
Epoch 9/10
110288/110288 [==============================] - 25s 228us/step - loss: 1.1025 - acc: 0.6777 - val_loss: 1.0549 - val_acc: 0.6896
Epoch 10/10
110288/110288 [==============================] - 25s 228us/step - loss: 1.0869 - acc: 0.6805 - val_loss: 1.0342 - val_acc: 0.6938
Out[17]:
<keras.callbacks.History at 0x7f74230eb5f8>
In [18]:
# Print prediction(s)
print("Prediction:")
print(logits_to_text(encdec_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))

print("\nCorrect Translation:")
print(french_sentences[:1])

print("\nOriginal text:")
print(english_sentences[:1])
Prediction:
new jersey est parfois chaud en l' et il est est en en <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>

Correct Translation:
["new jersey est parfois calme pendant l' automne , et il est neigeux en avril ."]

Original text:
['new jersey is sometimes quiet during autumn , and it is snowy in april .']

Model 5: Custom (IMPLEMENTATION)

Use everything you learned from the previous models to create a model that incorporates embedding and a bidirectional rnn into one model.

In [19]:
def model_final(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
    """
    Build and train a model that incorporates embedding, encoder-decoder, and bidirectional RNN on x and y
    :param input_shape: Tuple of input shape
    :param output_sequence_length: Length of output sequence
    :param english_vocab_size: Number of unique English words in the dataset
    :param french_vocab_size: Number of unique French words in the dataset
    :return: Keras model built, but not trained
    """
    # TODO: Implement

    # Hyperparameters
    learning_rate = 0.003
    
    # Build the layers    
    model = Sequential()
    # Embedding
    model.add(Embedding(english_vocab_size, 128, input_length=input_shape[1],
                         input_shape=input_shape[1:]))
    # Encoder
    model.add(Bidirectional(GRU(128)))
    model.add(RepeatVector(output_sequence_length))
    # Decoder
    model.add(Bidirectional(GRU(128, return_sequences=True)))
    model.add(TimeDistributed(Dense(512, activation='relu')))
    model.add(Dropout(0.5))
    model.add(TimeDistributed(Dense(french_vocab_size, activation='softmax')))
    model.compile(loss=sparse_categorical_crossentropy,
                  optimizer=Adam(learning_rate),
                  metrics=['accuracy'])
    return model

tests.test_model_final(model_final)

print('Final Model Loaded')
Final Model Loaded

Prediction (IMPLEMENTATION)

In [20]:
def final_predictions(x, y, x_tk, y_tk):
    """
    Gets predictions using the final model
    :param x: Preprocessed English data
    :param y: Preprocessed French data
    :param x_tk: English tokenizer
    :param y_tk: French tokenizer
    """
    # TODO: Train neural network using model_final
    model = model_final(x.shape,y.shape[1],
                        len(x_tk.word_index)+1,
                        len(y_tk.word_index)+1)
    model.summary()
    model.fit(x, y, batch_size=1024, epochs=25, validation_split=0.2)

    
    ## DON'T EDIT ANYTHING BELOW THIS LINE
    y_id_to_word = {value: key for key, value in y_tk.word_index.items()}
    y_id_to_word[0] = '<PAD>'

    sentence = 'he saw a old yellow truck'
    sentence = [x_tk.word_index[word] for word in sentence.split()]
    sentence = pad_sequences([sentence], maxlen=x.shape[-1], padding='post')
    sentences = np.array([sentence[0], x[0]])
    predictions = model.predict(sentences, len(sentences))

    print('Sample 1:')
    print(' '.join([y_id_to_word[np.argmax(x)] for x in predictions[0]]))
    print('Il a vu un vieux camion jaune')
    print('Sample 2:')
    print(' '.join([y_id_to_word[np.argmax(x)] for x in predictions[1]]))
    print(' '.join([y_id_to_word[np.max(x)] for x in y[0]]))


final_predictions(preproc_english_sentences, preproc_french_sentences, english_tokenizer, french_tokenizer)
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_5 (Embedding)      (None, 15, 128)           25600     
_________________________________________________________________
bidirectional_4 (Bidirection (None, 256)               197376    
_________________________________________________________________
repeat_vector_4 (RepeatVecto (None, 21, 256)           0         
_________________________________________________________________
bidirectional_5 (Bidirection (None, 21, 256)           295680    
_________________________________________________________________
time_distributed_19 (TimeDis (None, 21, 512)           131584    
_________________________________________________________________
dropout_10 (Dropout)         (None, 21, 512)           0         
_________________________________________________________________
time_distributed_20 (TimeDis (None, 21, 345)           176985    
=================================================================
Total params: 827,225
Trainable params: 827,225
Non-trainable params: 0
_________________________________________________________________
Train on 110288 samples, validate on 27573 samples
Epoch 1/25
110288/110288 [==============================] - 26s 240us/step - loss: 2.3767 - acc: 0.4869 - val_loss: 1.6101 - val_acc: 0.5805
Epoch 2/25
110288/110288 [==============================] - 25s 229us/step - loss: 1.4426 - acc: 0.6169 - val_loss: 1.2069 - val_acc: 0.6733
Epoch 3/25
110288/110288 [==============================] - 25s 229us/step - loss: 1.1652 - acc: 0.6749 - val_loss: 1.0104 - val_acc: 0.7148
Epoch 4/25
110288/110288 [==============================] - 25s 227us/step - loss: 1.0312 - acc: 0.7028 - val_loss: 0.8671 - val_acc: 0.7403
Epoch 5/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.8792 - acc: 0.7353 - val_loss: 0.7295 - val_acc: 0.7727
Epoch 6/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.7611 - acc: 0.7647 - val_loss: 0.6033 - val_acc: 0.8118
Epoch 7/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.6655 - acc: 0.7905 - val_loss: 0.5105 - val_acc: 0.8414
Epoch 8/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.5366 - acc: 0.8301 - val_loss: 0.4155 - val_acc: 0.8703
Epoch 9/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.4506 - acc: 0.8574 - val_loss: 0.3298 - val_acc: 0.9017
Epoch 10/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.3838 - acc: 0.8802 - val_loss: 0.2805 - val_acc: 0.9198
Epoch 11/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.3236 - acc: 0.9008 - val_loss: 0.2133 - val_acc: 0.9422
Epoch 12/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.2707 - acc: 0.9185 - val_loss: 0.1839 - val_acc: 0.9496
Epoch 13/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.2311 - acc: 0.9311 - val_loss: 0.1610 - val_acc: 0.9554
Epoch 14/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.2061 - acc: 0.9389 - val_loss: 0.1486 - val_acc: 0.9587
Epoch 15/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.1806 - acc: 0.9462 - val_loss: 0.1400 - val_acc: 0.9608
Epoch 16/25
110288/110288 [==============================] - 25s 228us/step - loss: 0.1653 - acc: 0.9508 - val_loss: 0.1220 - val_acc: 0.9647
Epoch 17/25
110288/110288 [==============================] - 25s 228us/step - loss: 0.1978 - acc: 0.9393 - val_loss: 0.1268 - val_acc: 0.9633
Epoch 18/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.1445 - acc: 0.9569 - val_loss: 0.1098 - val_acc: 0.9688
Epoch 19/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.1284 - acc: 0.9619 - val_loss: 0.1026 - val_acc: 0.9704
Epoch 20/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.1229 - acc: 0.9631 - val_loss: 0.1036 - val_acc: 0.9701
Epoch 21/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.1178 - acc: 0.9649 - val_loss: 0.0947 - val_acc: 0.9731
Epoch 22/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.1059 - acc: 0.9683 - val_loss: 0.0956 - val_acc: 0.9742
Epoch 23/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.1007 - acc: 0.9696 - val_loss: 0.0875 - val_acc: 0.9750
Epoch 24/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.6314 - acc: 0.8175 - val_loss: 0.3761 - val_acc: 0.8839
Epoch 25/25
110288/110288 [==============================] - 25s 229us/step - loss: 0.4125 - acc: 0.8758 - val_loss: 0.1613 - val_acc: 0.9634
Sample 1:
il a vu un vieux camion jaune <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
Il a vu un vieux camion jaune
Sample 2:
new jersey est parfois calme pendant l' automne et il est neigeux en avril <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
new jersey est parfois calme pendant l' automne et il est neigeux en avril <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>

Submission

When you're ready to submit, complete the following steps:

  1. Review the rubric to ensure your submission meets all requirements to pass
  2. Generate an HTML version of this notebook

    • Run the next cell to attempt automatic generation (this is the recommended method in Workspaces)
    • Navigate to FILE -> Download as -> HTML (.html)
    • Manually generate a copy using nbconvert from your shell terminal
      $ pip install nbconvert
      $ python -m nbconvert machine_translation.ipynb
  3. Submit the project

    • If you are in a Workspace, simply click the "Submit Project" button (bottom towards the right)

    • Otherwise, add the following files into a zip archive and submit them

    • helper.py
    • machine_translation.ipynb
    • machine_translation.html
      • You can export the notebook by navigating to File -> Download as -> HTML (.html).

Generate the html

Save your notebook before running the next cell to generate the HTML output. Then submit your project.

In [21]:
# Save before you run this cell!
!!jupyter nbconvert *.ipynb
Out[21]:
['[NbConvertApp] Converting notebook machine_translation.ipynb to html',
 '[NbConvertApp] Writing 360541 bytes to machine_translation.html',
 '[NbConvertApp] Converting notebook machine_translation_template.ipynb to html',
 '[NbConvertApp] Writing 306476 bytes to machine_translation_template.html']

Optional Enhancements

This project focuses on learning various network architectures for machine translation, but we don't evaluate the models according to best practices by splitting the data into separate test & training sets -- so the model accuracy is overstated. Use the sklearn.model_selection.train_test_split() function to create separate training & test datasets, then retrain each of the models using only the training set and evaluate the prediction accuracy using the hold out test set. Does the "best" model change?