This microservice was designed to express the meaning of an input sentence in a different wording while maintaining fluency. At its core is an NLP transformer that takes an English sentence as input and produces a set of paraphrased sentences, which is a conditional text-generation task. The model used here is T5ForConditionalGeneration from the Hugging Face transformers library. It is fine-tuned on Google's PAWS dataset and published on the Hugging Face model hub under the name Vamsi/T5_Paraphrase_Paws. PAWS itself contains 108,463 human-labeled and 656k noisily labeled sentence pairs that highlight the importance of modeling structure, context, and word order for the problem of paraphrase identification. The pairs were generated through a combination of word swapping and back-translation, with human judgment providing the labels.
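
Below is a minimal sketch of how the model could be loaded and queried with the transformers library. The `paraphrase:` prompt prefix, the sampling parameters, and the `paraphrase` helper function are illustrative assumptions, not the service's actual code.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Load the fine-tuned checkpoint from the Hugging Face model hub.
tokenizer = AutoTokenizer.from_pretrained("Vamsi/T5_Paraphrase_Paws")
model = T5ForConditionalGeneration.from_pretrained("Vamsi/T5_Paraphrase_Paws")

def paraphrase(sentence: str, num_paraphrases: int = 5) -> list[str]:
    # T5 is a text-to-text model, so the task is signalled with a prefix
    # (assumed here to be "paraphrase:").
    text = f"paraphrase: {sentence} </s>"
    encoding = tokenizer(text, return_tensors="pt", truncation=True)

    # Sample several candidate rewordings instead of one greedy output.
    outputs = model.generate(
        input_ids=encoding["input_ids"],
        attention_mask=encoding["attention_mask"],
        max_length=256,
        do_sample=True,
        top_k=120,
        top_p=0.95,
        num_return_sequences=num_paraphrases,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

print(paraphrase("The quick brown fox jumps over the lazy dog."))
```

Top-k/top-p sampling is used in this sketch (rather than greedy or beam decoding) because the service returns a set of paraphrases, and sampling tends to yield more diverse rewordings across the returned sequences.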