The project team engineered deep learning models for Portuguese-to-English machine translation. We built two models: a sequence-to-sequence (seq2seq) model with Long Short-Term Memory (LSTM) units and a Transformer model. Both were evaluated with the BLEU (Bilingual Evaluation Understudy) metric; the Transformer scored 0.45, outperforming the seq2seq model's 0.42.
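To illustrate how the reported scores are computed, here is a minimal sketch of sentence-level BLEU (uniform n-gram weights, single reference, no smoothing). This is an assumption for illustration only, not the project's actual evaluation pipeline, which would typically use a corpus-level implementation such as sacreBLEU.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # Count all n-grams of length n in the token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram
    precisions times a brevity penalty. Illustrative sketch only."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = ngrams(candidate, n)
        ref_counts = ngrams(reference, n)
        total = sum(cand_counts.values())
        if total == 0:
            return 0.0  # candidate too short to form any n-grams
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # unsmoothed BLEU is zero if any precision is zero
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages candidates shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * geo_mean

reference = "the cat sat on the mat".split()
print(bleu("the cat sat on the mat".split(), reference))  # → 1.0
```

A perfect match yields 1.0; partial overlaps fall between 0 and 1, which is the scale on which the 0.45 and 0.42 scores above are reported.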
The presentation slides are also available here.