The core idea of word embeddings is that every word in a language can be represented by a vector of N real values (a word vector) that captures its meaning, the contexts it appears in, and its relationships with other words.
This microservice uses French and English models trained on dumps of the French and English Wikipedia dated 2019-07-01 and 2019-11-01 respectively.
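For illustration, here is how such a model might be loaded and inspected with gensim; the choice of library and the file name `wiki.en.vec` are assumptions for the sketch, not details taken from this microservice.

```python
from gensim.models import KeyedVectors

# Hypothetical file name: fastText-style word vectors trained on a Wikipedia dump.
kv = KeyedVectors.load_word2vec_format("wiki.en.vec")

# Every word maps to a vector of N real values (N = kv.vector_size).
print(kv.vector_size)        # e.g. 300
print(kv["computer"][:5])    # first 5 components of the word vector
```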
Please select a word embedding function to test:
This function returns the words most similar to the input words. Positive words contribute positively to the similarity score; negative words contribute negatively.
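A minimal sketch of this operation using gensim's `most_similar`, assuming the hypothetical model file from the snippet above; the example words are illustrative only:

```python
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("wiki.en.vec")  # hypothetical file name

# Positive words pull the result towards their meaning, negative words push away.
# Classic analogy: king - man + woman ~ queen.
print(kv.most_similar(positive=["king", "woman"], negative=["man"], topn=5))
```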
This function returns the most likely center words for the given input context words.
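In gensim this corresponds to `Word2Vec.predict_output_word`, which requires the full trained model rather than bare vectors; whether this microservice uses that exact call is an assumption. A minimal sketch, with the hypothetical model file name `wiki.en.model`:

```python
from gensim.models import Word2Vec

model = Word2Vec.load("wiki.en.model")  # hypothetical file name; full model required

# Rank candidate center words by how well they fit the given context window.
print(model.predict_output_word(["the", "cat", "sat", "on", "the"], topn=5))
```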
This function returns the odd word out, i.e. the word that matches the other words in the input list least.
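With gensim this maps to `doesnt_match`; a sketch with illustrative inputs, again assuming the hypothetical vector file:

```python
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("wiki.en.vec")  # hypothetical file name

# Returns the word farthest from the mean of all the input word vectors.
print(kv.doesnt_match(["breakfast", "cereal", "dinner", "lunch"]))  # -> 'cereal'
```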
This function returns the word vector corresponding to the input word.
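In gensim this is plain indexing into the keyed vectors; a sketch under the same assumptions as above:

```python
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("wiki.en.vec")  # hypothetical file name

vec = kv["computer"]   # numpy array of length kv.vector_size
print(vec.shape)       # e.g. (300,)
```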