We first explore a simple scenario where our vocabulary contains only 2 words, \(a\) and \(b\), and we fix the initial state and the weights before the last layer.

Now we are given 3 training sentences.

Encode each of them into a sequence of vectors. As an example, the sentence \(a\ a\) is encoded as \(\mathbf{s_1} = [[1, 0], [1, 0]]\), where each word is replaced by its one-hot vector.

(To enter the sequence above, type [[1,0],[1,0]].)

Now encode the other 2 sentences into \(\mathbf{s_2}\) and \(\mathbf{s_3}\).

1 answer

To encode the given sentences into sequences of vectors, we convert each word into its vector representation using the one-hot encoding specified by the vocabulary.

Let's define the vocabulary:

  • Word 1: \(a\), which corresponds to the vector \([1, 0]\)
  • Word 2: \(b\), which corresponds to the vector \([0, 1]\)
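
Since the mapping is just a lookup table, it can be written directly in code. Here is a minimal Python sketch (the word names `"a"` and `"b"` follow the assumed vocabulary above):

```python
# Assumed 2-word vocabulary: each word maps to a length-2 one-hot vector.
vocab = {
    "a": [1, 0],  # Word 1
    "b": [0, 1],  # Word 2
}
```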

Now let's encode each of the three training sentences. Assuming the sentences are:

  1. Sentence 1: \(a\ a\)
  2. Sentence 2: \(a\ b\)
  3. Sentence 3: \(b\ b\)

Encoding the sentences:

  1. Sentence 1: \(a\ a\)

    • Encoding: \[ a \rightarrow [1, 0], \quad a \rightarrow [1, 0] \]
    • Thus, the encoded sequence is: \[ \mathbf{s_1} = [[1, 0], [1, 0]] \]
  2. Sentence 2: \(a\ b\)

    • Encoding: \[ a \rightarrow [1, 0], \quad b \rightarrow [0, 1] \]
    • Thus, the encoded sequence is: \[ \mathbf{s_2} = [[1, 0], [0, 1]] \]
  3. Sentence 3: \(b\ b\)

    • Encoding: \[ b \rightarrow [0, 1], \quad b \rightarrow [0, 1] \]
    • Thus, the encoded sequence is: \[ \mathbf{s_3} = [[0, 1], [0, 1]] \]

Final Encoded Sequences:

  • \(\mathbf{s_1} = [[1, 0], [1, 0]]\)
  • \(\mathbf{s_2} = [[1, 0], [0, 1]]\)
  • \(\mathbf{s_3} = [[0, 1], [0, 1]]\)

These are the encoded sequences for the provided sentences.
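
As a sanity check, the full encoding step can be reproduced with a short Python sketch (the three sentences below are the ones assumed above):

```python
# Assumed 2-word vocabulary (as above): word -> one-hot vector.
vocab = {"a": [1, 0], "b": [0, 1]}

def encode(sentence):
    """Encode a sentence, given as a list of words, into a list of one-hot vectors."""
    return [vocab[word] for word in sentence]

# The three training sentences assumed in the answer above.
sentences = [["a", "a"], ["a", "b"], ["b", "b"]]

for i, s in enumerate(sentences, start=1):
    print(f"s_{i} = {encode(s)}")

# Output:
# s_1 = [[1, 0], [1, 0]]
# s_2 = [[1, 0], [0, 1]]
# s_3 = [[0, 1], [0, 1]]
```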