Word Embedding Model Language

Please select the language of the model to use in this word embedding microservice:
Word Embedding Functions

The main concept of word embeddings is that every word in a language can be represented by a vector of N real values (a word vector), chosen to capture the word's meaning, its context, and its relationships with other words. Click here for more information on word embeddings.
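To make this concrete, here is a minimal sketch using the Gensim library (which this microservice uses for its Word2Vec and FastText models); the model file name is a hypothetical placeholder:

```python
from gensim.models import KeyedVectors

# Hypothetical file name; any model in word2vec text format works here.
kv = KeyedVectors.load_word2vec_format("english_wiki.vec")

# Words that occur in similar contexts end up with nearby vectors,
# so cosine similarity reflects semantic relatedness.
print(kv.similarity("king", "queen"))   # relatively high
print(kv.similarity("king", "banana"))  # relatively low
```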

In this microservice, we use French and English models trained on dumps of the French and English Wikipedia from 2019-07-01 and 2019-11-01, respectively.

Please select a word embedding function to test:

This function returns the words most similar to the input words. Positive words contribute positively towards the similarity, negative words negatively.
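In Gensim, this behaviour corresponds to KeyedVectors.most_similar; a minimal sketch, assuming a loaded English model (hypothetical file name):

```python
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("english_wiki.vec")  # hypothetical file name

# "king" and "woman" pull the result towards them, while "man" pushes it
# away; the top hit is expected to be close to "queen".
for word, score in kv.most_similar(positive=["king", "woman"],
                                   negative=["man"], topn=5):
    print(f"{word}\t{score:.3f}")
```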

This function returns the most probable center words for the given input context words.
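With Gensim's Word2Vec, this corresponds to predict_output_word, which scores vocabulary words as candidates for the center of the given context window. It needs the full trained model (trained with negative sampling), not just the word vectors; the model file name below is hypothetical:

```python
from gensim.models import Word2Vec

model = Word2Vec.load("english_wiki.model")  # hypothetical file name

# Rank candidate center words for a CBOW-style context window.
print(model.predict_output_word(["the", "capital", "of", "france"], topn=5))
```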

This function returns the unmatched word, i.e. the word that matches least with the other words in the input list.
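In Gensim this is KeyedVectors.doesnt_match; a minimal sketch (hypothetical file name):

```python
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("english_wiki.vec")  # hypothetical file name

# "car" lies furthest from the mean vector of the list,
# so it is reported as the odd one out.
print(kv.doesnt_match(["breakfast", "lunch", "dinner", "car"]))
```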

This function returns the word vector corresponding to the input word.
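With Gensim's KeyedVectors, the vector is obtained by indexing the model with the word; a minimal sketch (hypothetical file name):

```python
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("english_wiki.vec")  # hypothetical file name

vec = kv["apple"]          # the N-dimensional vector for "apple"
print(vec.shape, vec[:5])  # dimensionality and first few components
```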

Word2Vec

Click here for more information on the Word2Vec algorithm and the Gensim library used.

FastText

Click here for more information on the FastText algorithm and the Gensim library used.

GloVe

Click here for more information on the GloVe algorithm and the Stanford library used.