Computes the vector representation of each word in the vocabulary (Java version). Takes a JavaRDD of sentences, each sentence expressed as an iterable collection of words, and returns a Word2VecModel.
Computes the vector representation of each word in the vocabulary. Takes an RDD of sentences, each sentence expressed as an iterable collection of words, and returns a Word2VecModel.
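As a minimal sketch of calling fit, assuming an existing SparkContext named sc and a hypothetical whitespace-tokenized text file corpus.txt:

    import org.apache.spark.mllib.feature.{Word2Vec, Word2VecModel}
    import org.apache.spark.rdd.RDD

    // Each line of the (hypothetical) input file becomes one sentence,
    // represented as a sequence of words.
    val sentences: RDD[Seq[String]] = sc
      .textFile("corpus.txt")
      .map(line => line.split(" ").toSeq)

    // Train with default settings and obtain a Word2VecModel.
    val word2vec = new Word2Vec()
    val model: Word2VecModel = word2vec.fit(sentences)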
Sets initial learning rate (default: 0.025).
Sets the maximum length (in words) of each sentence in the input data. Any sentence longer than this threshold will be divided into chunks of up to maxSentenceLength size (default: 1000).
Sets minCount, the minimum number of times a token must appear to be included in the word2vec model's vocabulary (default: 5).
Sets the number of iterations (default: 1), which should be smaller than or equal to the number of partitions.
Sets number of partitions (default: 1). Use a small number for accuracy.
Sets random seed (default: a random long integer).
Sets vector size (default: 100).
Sets the window of words used as context (default: 5).
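The setters above can be chained, as each returns the Word2Vec instance itself. The following sketch simply restates the documented defaults; the fixed seed is illustrative, since the default seed is random:

    import org.apache.spark.mllib.feature.Word2Vec

    // Equivalent to new Word2Vec() except for the fixed seed.
    val word2vec = new Word2Vec()
      .setLearningRate(0.025)      // initial learning rate
      .setMaxSentenceLength(1000)  // longer sentences are split into chunks
      .setMinCount(5)              // minimum token frequency for the vocabulary
      .setNumIterations(1)         // should be <= number of partitions
      .setNumPartitions(1)         // keep small for accuracy
      .setSeed(42L)                // illustrative fixed seed for reproducibility
      .setVectorSize(100)          // dimensionality of the word vectors
      .setWindowSize(5)            // window of context words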
Word2Vec creates vector representations of words in a text corpus. The algorithm first constructs a vocabulary from the corpus and then learns vector representations of the words in the vocabulary. These vector representations can be used as features in natural language processing and machine learning algorithms.
We use the skip-gram model in our implementation and the hierarchical softmax method to train the model. The variable names in the implementation match the original C implementation.
For the original C implementation, see https://code.google.com/p/word2vec/. For the research papers, see "Efficient Estimation of Word Representations in Vector Space" and "Distributed Representations of Words and Phrases and their Compositionality".
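A trained Word2VecModel can be queried for individual word vectors and for nearest neighbours. A sketch assuming a model named model from a previous fit call and the hypothetical vocabulary word "spark":

    // Look up the learned vector for a single word
    // (fails if the word is not in the vocabulary).
    val vector = model.transform("spark")

    // Find the 5 words closest to "spark" with their cosine-similarity scores.
    val synonyms = model.findSynonyms("spark", 5)
    synonyms.foreach { case (word, similarity) =>
      println(s"$word $similarity")
    }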