In natural language processing, a word embedding is a learned representation of text in which words with similar meanings have similar representations. Such representations are used to encode words for text-analysis tasks, with the goal of improving performance on those tasks. Several models exist for learning word embeddings; the two most popular are Word2Vec and GloVe. First, we will cover the fundamentals of these models, and then we will see how quickly they can be implemented in S2 using the deeplearning4j library. Finally, we will compare the two models on several important parameters.
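The idea that words with similar meanings get similar vectors can be made concrete with cosine similarity, the measure most commonly used to compare embeddings. The sketch below uses tiny hand-made vectors purely for illustration — the values are invented, not produced by a trained Word2Vec or GloVe model:

```java
public class EmbeddingSimilarity {
    // Cosine similarity: dot(a, b) / (||a|| * ||b||), ranges from -1 to 1;
    // closer to 1 means the two vectors point in a more similar direction.
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot   += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        // Toy 3-dimensional "embeddings" (illustrative values only).
        double[] king  = {0.8, 0.6, 0.1};
        double[] queen = {0.7, 0.7, 0.2};
        double[] apple = {0.1, 0.2, 0.9};

        // Related words should score higher than unrelated ones.
        System.out.printf("king~queen: %.3f%n", cosine(king, queen));
        System.out.printf("king~apple: %.3f%n", cosine(king, apple));
    }
}
```

With a real trained model, the same comparison is done over learned vectors of a few hundred dimensions rather than these hand-picked values.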