Artificial Intelligence (AI): a broad category that includes the study and development of computer systems that can copy intelligent human behaviour (adapted from the Oxford Learner's Dictionary)

Machine Learning (ML): a branch of AI that uses statistical methods to imitate the way that humans learn (adapted from IBM)

Natural Language Processing (NLP): a branch of AI that focuses on training computers to interpret human text and spoken words (adapted from IBM)

Word Embeddings (WE): a part of NLP in which human words are converted into numerical representations (usually vectors) so that computers can understand them (adapted from Turing)

word2vec: an NLP technique that is commonly used to generate word embeddings

What are Word Embeddings?

Building off of the definitions above, word embeddings are one way that humans can represent language in a way that is legible to a machine. More specifically, word embeddings are an NLP approach that uses vectors to store textual data in multiple dimensions; by existing in the multi-dimensional space of vectors, word embeddings are able to include important semantic information within a given numeric representation.

For example, if we are trying to answer a research question about how popular a term is on the web at a given time, we might use a simple word frequency analysis to count how many times the word “candidate” shows up in tweets during a defined electoral period. However, if we wanted to gain a more nuanced understanding of what kind of language, biases or attitudes contextualize the term “candidate” in discourse, we would need to use a method like word embedding to encode meaning into our understanding of how people have talked about candidates over time. Instead of describing our text as a series of word counts, we would treat our text like coordinates in space, where similar words and concepts are closer to each other, and words that are different from each other are further away.

Comparing word frequency count and word embedding methods

For example, in the visualization above, a word frequency count returns the number of times the word “candidate” or “candidates” is used in a sample text corpus. When a word embedding is made from the same text corpus, we are able to map concepts and phrases that are closely related to “candidate” as neighbours, while other words and phrases such as “experimental study” (which refers to the research paper in question, and not to candidates specifically) are further away.

Here is another example of how different, but related words might be represented in a word embedding:

Making a Word Embedding

So, how do word embeddings work? To make a word embedding, an input word gets compressed into a dense vector.

Creating a word embedding vector

The magic and mystery of the word embedding process is that the vectors produced by the model often embed qualities of a word or phrase that are not interpretable by humans. However, for our purposes, having the text in vector format is all we need. With this format, we can perform tests like cosine similarity and other kinds of operations. Such operations can reveal many different kinds of relationships between words, as we’ll examine a bit later.
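To make the compression step concrete, here is a toy sketch of the core lookup: a one-hot input vector multiplied by an embedding matrix simply selects one dense row per word. The vocabulary and matrix values here are made up for illustration; a real model learns the matrix during training.

import numpy as np

# Toy vocabulary and a randomly initialized embedding matrix
# (5 words, 3 embedding dimensions)
vocab = ['call', 'me', 'ishmael', 'whale', 'sea']
embedding_matrix = np.random.rand(len(vocab), 3)

# One-hot encode the input word 'whale'
one_hot = np.zeros(len(vocab))
one_hot[vocab.index('whale')] = 1

# Multiplying by the matrix "compresses" the sparse one-hot vector
# into the dense 3-dimensional row associated with 'whale'
dense_vector = one_hot @ embedding_matrix
print(dense_vector)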

Using word2vec

Word2vec is one NLP technique that is commonly used to generate word embeddings. More precisely, word2vec is an algorithmic learning tool rather than a specific neural net that is already trained. The example we will be working through today has been made using this tool.

The series of algorithms inside the word2vec model try to describe and acquire parameters for a given word in terms of the words that appear immediately to its right and left in actual sentences. Essentially, it learns how to predict text.

Without going too deep into the algorithm, suffice it to say that it involves a two-step process:

  1. First, the input word gets compressed into a dense vector, as seen in the simplified diagram, “Creating a Word Embedding,” above.
  2. Second, the vector gets decoded into the set of context words. Keywords that appear within similar contexts will have similar vector representations in between steps.

Imagine that each word in a novel has its meaning determined by the ones that surround it in a limited window. For example, in Moby Dick’s first sentence, “me” is paired on either side by “Call” and “Ishmael.” After observing the windows around every word in the novel (or many novels), the computer will notice a pattern in which “me” falls between similar pairs of words to “her,” “him,” or “them.” Of course, the computer goes through a similar process for the words “Call” and “Ishmael,” for which “me” is reciprocally part of their contexts. This chaining of signifiers to one another mirrors some of humanists’ most sophisticated interpretative frameworks of language.
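To make the notion of a context window concrete, here is a minimal sketch in plain Python (no word2vec required) that pairs each target word with the words in a small window around it:

def context_windows(tokens, window=2):
    # Yield (target, context) pairs for each token in a sentence
    for i, target in enumerate(tokens):
        left = tokens[max(0, i - window):i]
        right = tokens[i + 1:i + 1 + window]
        yield target, left + right

list(context_windows(['call', 'me', 'ishmael'], window=1))
# [('call', ['me']), ('me', ['call', 'ishmael']), ('ishmael', ['me'])]

word2vec’s training objective is built from exactly these kinds of (target, context) pairs.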

The two main model architectures of word2vec are Continuous Bag of Words (CBOW) and Skip-Gram, which can be distinguished partly by their input and output during training.

CBOW takes the context words (for example, “Call”,“Ishmael”) as a single input and tries to predict the word of interest (“me”).

Skip-Gram does the opposite, taking a word of interest as its input (for example, “me”) and tries to learn how to predict its context words (“Call”,“Ishmael”).

In general, CBOW is faster and does well with frequent words, while Skip-Gram potentially represents rare words better.
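In gensim, the choice between the two architectures is controlled by the `sg` flag on `Word2Vec`. A minimal sketch (here `sentences` stands for the tokenized corpus we build later in this notebook):

from gensim.models import Word2Vec

# sg=0 (the default) trains a CBOW model; sg=1 trains a Skip-Gram model
cbow_model = Word2Vec(sentences, sg=0)
skipgram_model = Word2Vec(sentences, sg=1)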

Since the word embedding is a vector, we are able to perform tests like cosine similarity (which we’ll learn more about in a bit!) and other kinds of operations. Those operations can reveal many different kinds of relationships between words, as we shall see.

Bias and Language Models

You might already be piecing together that the encoding of meaning in word embeddings is entirely shaped by the patterns of language use captured in the training data. That is, what is included in a word embedding directly reflects the complex social and cultural biases of everyday human language - in fact, exploring how these biases function and change over time (as we will do later) is one of the most interesting ways to use word embeddings in social research.

It is simply impossible to have a bias-free language model (LM).

In LMs, bias is not a bug or a glitch; rather, it is an essential feature that is baked into the fundamental structure. For example, LMs inevitably learn and absorb the pejorative dimensions of language, which in turn can result in reproducing harmful correlations of meaning for words about race, class or gender (among others). When unchecked, these harms can be “amplified in downstream applications of word embeddings” (Arseniev-Koehler & Foster, 2020, p. 1).

As with any other computational model, it is important to critically engage with the source and context of the training data. One way that Schiffers, Kern and Hienert (2023) suggest doing this is by using domain-specific models. Working with models that understand the nuances of your particular topic or field can better account for the “specialized vocabulary and semantic relationships” that make applications of WE more effective.
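As a hint of how such probing looks in practice, the same `most_similar` interface we use later in this notebook can surface gendered associations a model has absorbed from its corpus. A hedged sketch (here `model` stands for the trained model we load later):

# Probe gendered associations learned from the training corpus:
# shift 'doctor' along a man->woman direction and inspect the neighbours.
# The results reflect patterns in the data, not facts about the world.
model.most_similar(positive=['doctor', 'woman'], negative=['man'])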

Preparing for our Analysis

Word2vec Features

Here are a few features of the word2vec tool that we can use to customize our analysis:

Note: the script uses default values for each argument.
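Since these features are easiest to read alongside code, here is a sketch of the main gensim `Word2Vec` arguments, annotated with the values we pass in the training cell later in this notebook:

from gensim.models import Word2Vec

model = Word2Vec(
    sentences,           # iterable of tokenized sentences
    vector_size=100,     # dimensionality of the word vectors
    window=5,            # how many words to the left/right count as context
    min_count=25,        # ignore words with fewer total occurrences
    sg=1,                # 1 = Skip-Gram, 0 = CBOW
    alpha=0.025,         # initial learning rate
    epochs=5,            # number of passes over the corpus
    batch_words=10000,   # words per training batch
)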

Some limitations of the word2vec Model

Let’s begin our analysis!

Exercise #1: Eggs, Sausages and Bacon

To begin, we are going to load a few packages that are necessary for our analysis. Please run the code cells below.

%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

plt.style.use('ggplot')

Create a Document-Term Matrix (DTM) with a Few Pseudo-Texts

To start off, we’re going to create a mini dataframe based on the use of the words “eggs,” “sausages” and “bacon” found in three different novels: A, B and C.

# dataframes!
import pandas

# Construct dataframe with three novels each containing three words
columns = ['eggs','sausage','bacon']
indices = ['Novel A', 'Novel B', 'Novel C']
dtm = [[50,60,60],[90,10,10], [20,70,70]]
dtm_df = pandas.DataFrame(dtm, columns = columns, index = indices)

# Show dataframe
dtm_df
         eggs  sausage  bacon
Novel A    50       60     60
Novel B    90       10     10
Novel C    20       70     70

Visualize

# Plot our points
plt.scatter(dtm_df['eggs'], dtm_df['sausage'])

# Make the graph look good
plt.xlim([0, 100])
plt.ylim([0, 100])
plt.xlabel('eggs')
plt.ylabel('sausage')
plt.show()

Vectors

At a glance, a couple of points are lying closer to one another. We used the word frequencies of just two of the three words (eggs and sausages) in order to plot our texts in a two-dimensional plane. The term frequency “summaries” of Novel A & Novel C are pretty similar to one another: they both share a major concern with “sausage”, whereas Novel B seems to focus primarily on “eggs.”

This raises a question: how can we operationalize our intuition that the spatial distance presented here expresses topical similarity?

Cosine Similarity

One of the most useful measurements of the relationship between points like these is their Cosine Similarity. Cosine similarity can operate on textual data that contain word vectors and allows us to identify how similar documents are to each other, for example. Cosine Similarity thus helps us understand how much content overlap a set of documents have with one another. To see how, imagine that we were to draw an arrow from the origin of the graph - point (0,0) - to the dot representing each text. This arrow is called a vector.

Mathematically, the cosine similarity of two vectors $A$ and $B$ can be represented as:

$$\cos(\theta) = \frac{A \cdot B}{\|A\| \, \|B\|} = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2} \, \sqrt{\sum_{i=1}^{n} B_i^2}}$$

Using our example above, we can see that the angle from (0,0) between Novel C and Novel A (orange triangle) is smaller than between Novel A and Novel B (navy triangle) or between Novel C and Novel B (both triangles together).

Because this similarity measurement uses the cosine of the angle between vectors, the magnitude is not a matter of concern (this feature is really helpful for text vectors that can often be really long!). Instead, the output of cosine similarity yields a value between 0 and 1 (we don’t have to work with something confusing like 18º!) that can be easily interpreted and compared - and thus we can also avoid the troubles associated with other dimensional distance measures such as Euclidean Distance.
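As a sanity check, here is the formula written out in NumPy and applied to two of the breakfast-word vectors from the dataframe above (counts for eggs, sausage and bacon, respectively):

import numpy as np

def cos_sim(a, b):
    # Dot product divided by the product of the vector magnitudes
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

novel_a = np.array([50, 60, 60])
novel_c = np.array([20, 70, 70])
round(cos_sim(novel_a, novel_c), 2)  # 0.95, matching the table computed below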

Calculating Cosine Similarity

# scikit-learn computes Cosine Similarity directly; later, when we need
# Cosine Distance, we will simply subtract the similarities from 1

from sklearn.metrics.pairwise import cosine_similarity

cos_sim = cosine_similarity(dtm_df)

# Make it a little easier to read by rounding the values
np.round(cos_sim, 2)

# Label the dataframe rows and columns with the novel names

frame_2 = np.round(cos_sim, 2)
frame_2 = pandas.DataFrame(frame_2, columns = indices, index = indices)
frame_2
frame_2
         Novel A  Novel B  Novel C
Novel A     1.00     0.64     0.95
Novel B     0.64     1.00     0.35
Novel C     0.95     0.35     1.00

From this output table, which novels appear to be more similar to each other?

Exercise #2: Working with 19th- and Early 20th-Century Literature

# Compare the distance between novels

filelist = ['txtlab_Novel450_English/EN_1850_Hawthorne,Nathaniel_TheScarletLetter_Novel.txt',
            'txtlab_Novel450_English/EN_1851_Hawthorne,Nathaniel_TheHouseoftheSevenGables_Novel.txt',
            'txtlab_Novel450_English/EN_1920_Fitzgerald,FScott_ThisSideofParadise_Novel.txt',
            'txtlab_Novel450_English/EN_1922_Fitzgerald,FScott_TheBeautifulandtheDamned_Novel.txt',
            'txtlab_Novel450_English/EN_1811_Austen,Jane_SenseandSensibility_Novel.txt',
            'txtlab_Novel450_English/EN_1813_Austen,Jane_PrideandPrejudice_Novel.txt']

novel_names = ['Hawthorne: Scarlet Letter',
           'Hawthorne: Seven Gables',
           'Fitzgerald: This Side of Paradise',
           'Fitzgerald: Beautiful and the Damned',
           'Austen: Sense and Sensibility',
           'Austen: Pride and Prejudice']

text_list = []

for file in filelist:
    with open(file, 'r', encoding = 'utf-8') as myfile:
        text_list.append(myfile.read()) 

# Import the function CountVectorizer
from sklearn.feature_extraction.text import CountVectorizer

cv = CountVectorizer(stop_words = 'english', min_df = 3, binary=True)

novel_dtm = cv.fit_transform(text_list).toarray()
feature_list = cv.get_feature_names_out()
dtm_df_novel = pandas.DataFrame(novel_dtm, columns = feature_list, index = novel_names)
dtm_df_novel
abandoned abhorrence abide abilities ability able aboard abode abominable abominably ... yielding yonder york young younger youngest youth youthful youths zeal
Hawthorne: Scarlet Letter 1 1 1 1 1 1 1 1 0 0 ... 1 1 1 1 1 1 1 1 0 1
Hawthorne: Seven Gables 1 0 1 1 1 1 1 1 1 0 ... 1 1 0 1 1 1 1 1 0 1
Fitzgerald: This Side of Paradise 1 0 0 1 1 1 0 0 0 0 ... 0 0 1 1 1 0 1 1 1 0
Fitzgerald: Beautiful and the Damned 1 1 0 0 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 0 1 0
Austen: Sense and Sensibility 1 1 0 1 1 1 0 1 0 1 ... 0 0 0 1 1 1 1 1 0 1
Austen: Pride and Prejudice 0 1 1 1 0 1 0 1 1 1 ... 1 0 1 1 1 1 1 0 1 0

6 rows × 6993 columns

cos_sim_novel = cosine_similarity(dtm_df_novel)
cos_sim_novel = np.round(cos_sim_novel, 2)
cos_df = pandas.DataFrame(cos_sim_novel, columns = novel_names, index = novel_names)
cos_df
Hawthorne: Scarlet Letter Hawthorne: Seven Gables Fitzgerald: This Side of Paradise Fitzgerald: Beautiful and the Damned Austen: Sense and Sensibility Austen: Pride and Prejudice
Hawthorne: Scarlet Letter 1.00 0.80 0.69 0.75 0.67 0.67
Hawthorne: Seven Gables 0.80 1.00 0.74 0.80 0.70 0.70
Fitzgerald: This Side of Paradise 0.69 0.74 1.00 0.78 0.62 0.61
Fitzgerald: Beautiful and the Damned 0.75 0.80 0.78 1.00 0.69 0.68
Austen: Sense and Sensibility 0.67 0.70 0.62 0.69 1.00 0.81
Austen: Pride and Prejudice 0.67 0.70 0.61 0.68 0.81 1.00
# Visualizing differences

from sklearn.manifold import MDS

# Two components as we're plotting points in a two-dimensional plane
# "Precomputed" because we provide a distance matrix
# We will also specify `random_state` so that the plot is reproducible.

# Transform cosine similarity to cosine distance
cos_dist = 1 - cosine_similarity(dtm_df_novel)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=1, normalized_stress="auto")

pos = mds.fit_transform(cos_dist)  # shape (n_samples, n_components)
xs, ys = pos[:, 0], pos[:, 1]

for x, y, name in zip(xs, ys, novel_names):
    plt.scatter(x, y)
    plt.text(x, y, name)

plt.show()

The above method has a broad range of applications, such as unsupervised clustering. Common techniques include K-Means Clustering and Hierarchical Dendrograms. These attempt to identify groups of texts with shared content, based on these kinds of distance measures.
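As an illustration, here is a minimal K-Means sketch over the same document-term rows; the choice of two clusters is arbitrary and just for demonstration:

from sklearn.cluster import KMeans

# Group the six novels into two clusters based on their word usage
km = KMeans(n_clusters=2, n_init=10, random_state=1)
labels = km.fit_predict(dtm_df_novel)
dict(zip(novel_names, labels))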

Here’s an example of a dendrogram based on these six novels:

from scipy.cluster.hierarchy import ward, dendrogram
from scipy.spatial.distance import squareform

# ward expects a condensed distance matrix, so convert our square matrix first
linkage_matrix = ward(squareform(cos_dist, checks=False))

dendrogram(linkage_matrix, orientation="right", labels=novel_names)

plt.tight_layout()  # fixes margins

plt.show()

Vector Semantics

We can also turn this logic on its head. Rather than produce vectors representing texts based on their words, we will produce vectors for the words based on their contexts.

# Turn our DTM sideways

dtm_df_novel.T.head()
Hawthorne: Scarlet Letter Hawthorne: Seven Gables Fitzgerald: This Side of Paradise Fitzgerald: Beautiful and the Damned Austen: Sense and Sensibility Austen: Pride and Prejudice
abandoned 1 1 1 1 1 0
abhorrence 1 0 0 1 1 1
abide 1 1 0 0 0 1
abilities 1 1 1 0 1 1
ability 1 1 1 1 1 0
# Find the Cosine Distances between pairs of word-vectors

cos_sim_words = cosine_similarity(dtm_df_novel.T)
# In readable format

np.round(cos_sim_words, 2)
array([[1.  , 0.67, 0.52, ..., 0.89, 0.52, 0.77],
       [0.67, 1.  , 0.58, ..., 0.5 , 0.58, 0.58],
       [0.52, 0.58, 1.  , ..., 0.58, 0.33, 0.67],
       ...,
       [0.89, 0.5 , 0.58, ..., 1.  , 0.29, 0.87],
       [0.52, 0.58, 0.33, ..., 0.29, 1.  , 0.  ],
       [0.77, 0.58, 0.67, ..., 0.87, 0.  , 1.  ]])

Theoretically you could visualize and cluster these as well - but this takes a lot of computational power!

We’ll thus turn to the machine learning version: word embeddings

# Clean-up memory
import sys

# These are the usual ipython objects, including this one you are creating
ipython_vars = ['In', 'Out', 'exit', 'quit', 'get_ipython', 'ipython_vars']

# Get a sorted list of the objects and their sizes
sorted([(x, sys.getsizeof(globals().get(x))) for x in dir() if not x.startswith('_') and x not in sys.modules and x not in ipython_vars], key=lambda x: x[1], reverse=True)

 
del cos_sim_words 
del dtm_df_novel 
del novel_dtm 
del feature_list

At this point you should restart your kernel if you have less than 4 GB of memory available

  • Do this by clicking on the “Kernel” menu and hitting “restart”

Exercise #3: Using word2vec with 150 English Novels

In this exercise, we’ll use an English-language subset from a dataset of novels created by Andrew Piper. Specifically, we’ll look at 150 novels by British and American authors spanning the years 1771-1930. These texts reside on disk, each in a separate plaintext file. Metadata is contained in a spreadsheet distributed with the novel files.

Metadata Columns

  1. Filename: Name of file on disk
  2. ID: Unique ID in Piper corpus
  3. Language: Language of novel
  4. Date: Initial publication date
  5. Title: Title of novel
  6. Gender: Authorial gender
  7. Person: Textual perspective
  8. Length: Number of tokens in novel
# Data Wrangling

import os
import numpy as np
import pandas
from scipy.spatial.distance import cosine
from sklearn.metrics import pairwise
from sklearn.manifold import MDS, TSNE
# Natural Language Processing

import nltk
nltk.download('punkt')
from nltk.tokenize import word_tokenize, sent_tokenize
[nltk_data] Downloading package punkt to /home/jovyan/nltk_data...
[nltk_data]   Package punkt is already up-to-date!
# New library (not in Anaconda): gensim
import gensim

Import Corpus

# Custom Tokenizer for Classroom Use

from string import punctuation

def fast_tokenize(text):

    # Lower-case the text
    lower_case = text.lower()

    # Iterate through text removing punctuation characters
    no_punct = "".join([char for char in lower_case if char not in punctuation])

    # Split text over whitespace into list of words
    tokens = no_punct.split()

    return tokens

Import Metadata

# Import Metadata into Pandas Dataframe

meta_df = pandas.read_csv('resources/txtlab_Novel450_English.csv', encoding = 'utf-8')
# Check Metadata

meta_df.head()
filename id language date author title gender person length
0 EN_1771_Mackenzie,Henry_TheManofFeeling_Novel.txt 151 English 1771 Mackenzie,Henry TheManofFeeling male first 36458
1 EN_1771_Smollett,Tobias_TheExpedictionofHenryC... 152 English 1771 Smollett,Tobias TheExpedictionofHenryClinker male first 148261
2 EN_1778_Burney,Fanny_Evelina_Novel.txt 153 English 1778 Burney,Fanny Evelina female first 154168
3 EN_1782_Burney,Fanny_Cecilia_Novel.txt 154 English 1782 Burney,Fanny Cecilia female third 328981
4 EN_1786_Beckford,William_Vathek_Novel.txt 155 English 1786 Beckford,William Vathek male third 36077
# Set location of corpus folder

fiction_folder = 'txtlab_Novel450_English/'
import os

# Create an empty list to store the text of each novel
novel_list = []

# Iterate through filenames in 'fiction_folder'
for filename in os.listdir(fiction_folder):
    file_path = os.path.join(fiction_folder, filename)
    
    try:
        # Attempt to read the novel text as a string using utf-8 encoding
        with open(file_path, 'r', encoding='utf-8') as file_in:
            this_novel = file_in.read()
        
        # Add novel text as a single string to the master list
        novel_list.append(this_novel)
    
    except UnicodeDecodeError as e:
        # Handle encoding errors by skipping the problematic file
        print(f"UnicodeDecodeError: Unable to read '{filename}' - Skipping this file.")
        continue

# Now 'novel_list' contains the text of all readable novels.
# Inspect first item in novel_list

novel_list[0][:500]
"\nChapter I\n\nAt half-past six o'clock on Sunday night Barnabas came out of his bedroom. The Thayer house was only one story high, and there were no chambers. A number of little bedrooms were clustered around the three square rooms—the north and south parlors, and the great kitchen.\n\nBarnabas walked out of his bedroom straight into the kitchen where the other members of the family were. They sat before the hearth fire in a semi-circle—Caleb Thayer, his wife Deborah, his son Ephraim, and his daught"

Pre-Processing

word2vec learns about the relationships among words by observing them in context. This means that we want to split our texts into word-units. However, we also want to maintain sentence boundaries, since the last word of one sentence does not form a meaningful context for the first word of the next.

Since novels were imported as single strings, we’ll first need to divide them into sentences, and second, we’ll split each sentence into its own list of words.

# Split each novel into sentences

sentences = [sentence for novel in novel_list for sentence in sent_tokenize(novel)]

del novel_list
# Inspect first sentence

sentences[0]
"\nChapter I\n\nAt half-past six o'clock on Sunday night Barnabas came out of his bedroom."
# Split each sentence into tokens

sentences = [fast_tokenize(sentence) for sentence in sentences]
# Remove any sentences that contain zero tokens

sentences = [sentence for sentence in sentences if sentence != []]
# Inspect first sentence

sentences[0]
['chapter',
 'i',
 'at',
 'halfpast',
 'six',
 'oclock',
 'on',
 'sunday',
 'night',
 'barnabas',
 'came',
 'out',
 'of',
 'his',
 'bedroom']

Training

To train the model, we would run:

# Train word2vec model from txtLab corpus

model = gensim.models.Word2Vec(sentences, vector_size=100, window=5, \
                               min_count=25, sg=1, alpha=0.025, epochs=5, batch_words=10000)

However, this is both slow and memory intensive, so instead we will load pre-trained data.

# Unload the large sentence list to free memory
del sentences

# Load pre-trained model from harddisk
model = gensim.models.KeyedVectors.load_word2vec_format('resources/word2vec.txtlab_Novel150_English.txt')

Embeddings

# Return dense word vector

model.get_vector('whale')
array([-0.5510711 , -0.11189298, -0.04959059, -0.05850497,  0.28790763,
       -0.80342406, -0.07215538,  0.2721556 , -0.24760762, -0.4051926 ,
        0.01354405, -0.7165052 ,  0.17665575,  0.40048674, -0.19900815,
        0.20170024,  0.26689592, -0.07850418,  0.41761532, -0.465634  ,
       -0.02264982,  0.03582832, -0.3957834 , -0.3504738 , -0.10894601,
       -0.02075713, -0.08951025,  0.63399905, -0.22439238, -0.04571422,
        0.02540515, -0.09852695, -0.18284857, -0.09806305,  0.06884101,
        0.20008531,  0.617396  , -0.15709312, -0.6067674 ,  0.5979467 ,
       -0.3323625 , -0.21599118,  0.1550317 , -0.11773711,  0.72263384,
       -0.4205337 ,  0.01987723, -0.0929396 ,  0.01469748,  0.26177695,
        0.05429281,  0.33651814,  0.41468495,  0.44761443, -0.34722948,
        0.4060455 , -0.00145013,  0.11014426, -0.25099453,  0.12387881,
       -0.5413976 ,  0.25108388,  0.34349084, -0.00202278,  0.05355506,
        0.02677856, -0.05316461,  0.62082773, -0.16097702,  0.2687234 ,
       -0.41135943,  0.7923443 , -0.20083408,  0.00829648,  0.29228744,
       -0.08214567,  0.6325427 , -0.2888334 , -0.18535183,  0.6230707 ,
       -0.23328477,  0.18710871, -0.45419276,  0.26097402, -0.32497615,
        0.06670722,  0.08160412,  0.43276155,  0.34504986,  0.44552633,
        0.61302644, -0.09112564,  0.1026976 ,  0.08310616,  0.33132783,
        0.23587197, -0.03966643,  0.0349041 ,  0.06835472,  0.00227987],
      dtype=float32)

Vector-Space Operations

Similarity

Since words are represented as dense vectors, we can ask how similar words’ meanings are based on their cosine similarity (essentially how much they overlap). gensim has a few out-of-the-box functions that enable different kinds of comparisons.

# Find cosine similarity between two given word vectors

model.similarity('pride','prejudice')
0.591623
# Find nearest word vectors by cosine similarity

model.most_similar('pride')
[('unworthiness', 0.7083385586738586),
 ('vanity', 0.70763099193573),
 ('hardihood', 0.7038336396217346),
 ('heroism', 0.7029452919960022),
 ('selfishness', 0.6984862089157104),
 ('egotism', 0.6983219385147095),
 ('unselfishness', 0.6943386793136597),
 ('arrogance', 0.6935237646102905),
 ('selfconceit', 0.690157413482666),
 ('timidity', 0.69000643491745)]
# Given a list of words, we can ask which doesn't belong

# Finds mean vector of words in list
# and identifies the word furthest from that mean

model.doesnt_match(['pride','prejudice', 'whale'])
'whale'

Multiple Valences

A word embedding may encode both primary and secondary meanings that are present at the same time. In order to identify secondary meanings in a word, we can subtract the vectors of primary (or simply unwanted) meanings. For example, we may wish to remove the sense of river bank from the word bank. This would be written mathematically as BANK - RIVER, which in gensim’s interface lists BANK as a positive term and RIVER as a negative one.

# Get most similar words to BANK, in order to get a sense for its primary meaning

model.most_similar('bank')
[('river', 0.7111629843711853),
 ('creek', 0.6831797361373901),
 ('shore', 0.6765630841255188),
 ('cove', 0.6756646633148193),
 ('ferryboat', 0.6710000038146973),
 ('thames', 0.6699836850166321),
 ('margin', 0.669341504573822),
 ('banks', 0.6658000946044922),
 ('hanger', 0.6630111336708069),
 ('wharf', 0.6603569984436035)]
# Remove the sense of "river bank" from "bank" and see what is left

model.most_similar(positive=['bank'], negative=['river'])
[('unpaid', 0.37325167655944824),
 ('fee', 0.3700193762779236),
 ('cheque', 0.35955584049224854),
 ('embezzlement', 0.3573637008666992),
 ('deposit', 0.35101866722106934),
 ('salary', 0.3505880534648895),
 ('cash', 0.3501802086830139),
 ('mortgage', 0.3443868160247803),
 ('cowperwoods', 0.344247430562973),
 ('purchase', 0.3422768712043762)]

Analogy

Analogies are rendered as simple mathematical operations in vector space. For example, the canonical word2vec analogy MAN is to KING as WOMAN is to ?? is rendered as KING - MAN + WOMAN. In the gensim interface, we designate KING and WOMAN as positive terms and MAN as a negative term, since it is subtracted from those.

# Get most similar words to KING, in order
# to get a sense for its primary meaning

model.most_similar('king')
[('duke', 0.795354425907135),
 ('prince', 0.7459726929664612),
 ('otho', 0.7265864610671997),
 ('governor', 0.7148163318634033),
 ('kings', 0.6957926154136658),
 ('justicer', 0.6933550238609314),
 ('commanderinchief', 0.6793581247329712),
 ('minister', 0.6772224307060242),
 ('emperor', 0.6694881916046143),
 ('wizard', 0.668773353099823)]
# The canonical word2vec analogy: King - Man + Woman -> Queen

model.most_similar(positive=['woman', 'king'], negative=['man'])
[('queen', 0.7486673593521118),
 ('princess', 0.7174912095069885),
 ('nun', 0.6718207597732544),
 ('duchess', 0.6638779044151306),
 ('dunstan', 0.6449073553085327),
 ('helena', 0.6422445774078369),
 ('duke', 0.6287195682525635),
 ('ruritania', 0.6268595457077026),
 ('bride', 0.6220378875732422),
 ('lomellino', 0.6219776272773743)]

Gendered Vectors

Note that this method uses vector projection, whereas Schmidt had used rejection.

# Feminine Vector

model.most_similar(positive=['she','her','hers','herself'], negative=['he','him','his','himself'])
[('louisa', 0.5036913156509399),
 ('helens', 0.45718511939048767),
 ('fragile', 0.4379361867904663),
 ('maiden', 0.4373876452445984),
 ('rosabella', 0.4361468553543091),
 ('jane', 0.43083661794662476),
 ('anne', 0.4306352138519287),
 ('charms', 0.43060559034347534),
 ('elizabeth', 0.429295152425766),
 ('womanly', 0.42321687936782837)]
# Masculine Vector

model.most_similar(positive=['he','him','his','himself'], negative=['she','her','hers','herself'])
[('mahbub', 0.42675507068634033),
 ('buck', 0.40121230483055115),
 ('osterman', 0.39523470401763916),
 ('bicycle', 0.3810529410839081),
 ('bill', 0.38029444217681885),
 ('policeman', 0.3739871680736542),
 ('pipe', 0.36621248722076416),
 ('sergeant', 0.3662109673023224),
 ('foreman', 0.35990503430366516),
 ('bonneville', 0.3561386466026306)]

Visualization

# Dictionary of words in model

model.key_to_index
{'': 0,
 'the': 1,
 'and': 2,
 'of': 3,
 'to': 4,
 'a': 5,
 'i': 6,
 'in': 7,
 'was': 8,
 'he': 9,
 'that': 10,
 ...}
# Visualizing the whole vocabulary would make it hard to read

len(model.key_to_index)
20865
# For interpretability, we'll select words that already have a semantic relation

her_tokens = [token for token,weight in model.most_similar(positive=['she','her','hers','herself'], \
                                                       negative=['he','him','his','himself'], topn=50)]
# Inspect list

her_tokens[:15]
['louisa',
 'helens',
 'fragile',
 'maiden',
 'rosabella',
 'jane',
 'anne',
 'charms',
 'elizabeth',
 'womanly',
 'fanny',
 'sex',
 'portmans',
 'lovable',
 'lucy']
# Get the vector for each sampled word

vectors = [model.get_vector(word) for word in her_tokens] 
# Calculate distances among texts in vector space

dist_matrix = pairwise.pairwise_distances(vectors, metric='cosine')
dist_matrix
array([[0.0000000e+00, 4.2728323e-01, 6.4128482e-01, ..., 3.4401667e-01,
        6.0880047e-01, 4.3875921e-01],
       [4.2728323e-01, 1.1920929e-07, 6.5647769e-01, ..., 5.0295484e-01,
        3.7134373e-01, 6.0509455e-01],
       [6.4128482e-01, 6.5647769e-01, 5.9604645e-08, ..., 6.4673072e-01,
        5.1537478e-01, 7.2210795e-01],
       ...,
       [3.4401679e-01, 5.0295490e-01, 6.4673066e-01, ..., 0.0000000e+00,
        5.2231884e-01, 6.0237920e-01],
       [6.0880047e-01, 3.7134373e-01, 5.1537478e-01, ..., 5.2231884e-01,
        5.9604645e-08, 6.5933627e-01],
       [4.3875921e-01, 6.0509455e-01, 7.2210795e-01, ..., 6.0237920e-01,
        6.5933627e-01, 2.9802322e-07]], dtype=float32)
# Multi-Dimensional Scaling (Project vectors into 2-D)

mds = MDS(n_components=2, dissimilarity='precomputed', normalized_stress='auto')
embeddings = mds.fit_transform(dist_matrix)
# Make a pretty graph

_, ax = plt.subplots(figsize=(10,10))
ax.scatter(embeddings[:,0], embeddings[:,1], alpha=0)
for i in range(len(vectors)):
    ax.annotate(her_tokens[i], (embeddings[i,0], embeddings[i,1]))

# For comparison, here is the same graph using a masculine-pronoun vector

his_tokens = [token for token,weight in model.most_similar(positive=['he','him','his','himself'], \
                                                       negative=['she','her','hers','herself'], topn=50)]
vectors = [model.get_vector(word) for word in his_tokens]
dist_matrix = pairwise.pairwise_distances(vectors, metric='cosine')
mds = MDS(n_components=2, dissimilarity='precomputed', normalized_stress='auto')
embeddings = mds.fit_transform(dist_matrix)
_, ax = plt.subplots(figsize=(10,10))
ax.scatter(embeddings[:,0], embeddings[:,1], alpha=0)
for i in range(len(vectors)):
    ax.annotate(his_tokens[i], (embeddings[i,0], embeddings[i,1]))

What kinds of semantic relationships exist in the diagram above?

Are there any words that seem out of place?

Saving & Loading Models

# Save current model for later use

model.save_word2vec_format('resources/word2vec.txtlab_Novel150_English.txt')
# Load up models from disk

# Model trained on Eighteenth Century Collections Online corpus (~2500 texts)
# Made available by Ryan Heuser: http://ryanheuser.org/word-vectors-1/

ecco_model = gensim.models.KeyedVectors.load_word2vec_format('resources/word2vec.ECCO-TCP.txt')
#ecco_model = gensim.models.Word2Vec.load_word2vec_format('resources/word2vec.ECCO-TCP.txt') # deprecated
# What are similar words to BANK?

ecco_model.most_similar('bank')
[('ground', 0.657000720500946),
 ('turf', 0.6564096808433533),
 ('surface', 0.6480724811553955),
 ('declivity', 0.642420768737793),
 ('hill', 0.637111485004425),
 ('bridge', 0.6332241296768188),
 ('terrace', 0.6301186084747314),
 ('channel', 0.629577100276947),
 ('banks', 0.6294739246368408),
 ('wall', 0.6289103627204895)]
# What if we remove the sense of "river bank"?

ecco_model.most_similar(positive=['bank'], negative=['river'])
[('currency', 0.36714255809783936),
 ('suit', 0.35922902822494507),
 ('stamp', 0.35820379853248596),
 ('promissory', 0.35605305433273315),
 ('pension', 0.35183224081993103),
 ('blank', 0.3518177568912506),
 ('payable', 0.34270504117012024),
 ('mortality', 0.34262457489967346),
 ('weekly', 0.3408060371875763),
 ('weal', 0.3309359848499298)]

Exercises!

See if you can attempt the following exercises on your own!

## EX. Use the most_similar method to find the tokens nearest to 'car' in either model.
##     Do the same for 'motorcar'.

## Q.  What characterizes these two words in the corpus? Does this make sense?

model.most_similar('car')
[('hansom', 0.7500696778297424),
 ('taxi', 0.7478840947151184),
 ('cars', 0.7394878268241882),
 ('buggy', 0.7370666861534119),
 ('wagon', 0.7363459467887878),
 ('motor', 0.7324641346931458),
 ('omnibus', 0.7272354960441589),
 ('bus', 0.7186578512191772),
 ('cab', 0.711317777633667),
 ('sled', 0.7040993571281433)]
model.most_similar('motorcar')
[('haha', 0.7878643870353699),
 ('laundry', 0.7624444961547852),
 ('hoop', 0.7621448040008545),
 ('hallway', 0.747283399105072),
 ('taxi', 0.7455681562423706),
 ('slowed', 0.7431114315986633),
 ('broom', 0.7404183149337769),
 ('latchkey', 0.739296555519104),
 ('joness', 0.7392609119415283),
 ('shack', 0.7387081384658813)]
## EX. How does our model answer the analogy: MADRID is to SPAIN as PARIS is to __________

## Q.  What has our model learned about nation-states?


model.most_similar(positive=['paris', 'spain'], negative = ['madrid'])
[('france', 0.7266117334365845),
 ('europe', 0.703520655632019),
 ('england', 0.6902426481246948),
 ('rome', 0.684619128704071),
 ('italy', 0.6807969212532043),
 ('germany', 0.6742438077926636),
 ('greece', 0.6369345784187317),
 ('london', 0.6132417917251587),
 ('america', 0.5939120054244995),
 ('india', 0.5838022232055664)]
## EX. Perform the canonical Word2Vec addition again but leave out a term:
##     Try 'king' - 'man', 'woman' - 'man', 'woman' + 'king'

## Q.  What do these indicate semantically?

model.most_similar(positive= ['woman'], negative=['man'])
[('maiden', 0.4955204129219055),
 ('louisa', 0.48071783781051636),
 ('adorable', 0.47827956080436707),
 ('charms', 0.46611225605010986),
 ('lover', 0.4660607874393463),
 ('maid', 0.44939324259757996),
 ('flora', 0.447085440158844),
 ('jane', 0.44704630970954895),
 ('lucilla', 0.43248656392097473),
 ('innocent', 0.43181905150413513)]
## EX. Heuser's blog post explores an analogy in eighteenth-century thought that
##     RICHES are to VIRTUE what LEARNING is to GENIUS. How true is this in
##     the ECCO-trained Word2Vec model? Is it true in the one we trained?

##  Q. How might we compare word2vec models more generally?
# ECCO model: RICHES are to VIRTUE what LEARNING is to ??

ecco_model.most_similar(positive=['learning', 'virtue'], negative=['riches'])
[('piety', 0.7372760772705078),
 ('morality', 0.7266900539398193),
 ('science', 0.6974709630012512),
 ('prudence', 0.6855395436286926),
 ('philosophy', 0.683079183101654),
 ('wisdom', 0.6511391997337341),
 ('genius', 0.6505820155143738),
 ('humanity', 0.640283465385437),
 ('modesty', 0.6369403004646301),
 ('morals', 0.6340599656105042)]
# txtLab model: RICHES are to VIRTUE what LEARNING is to ??
model.most_similar(positive=['learning', 'virtue'], negative=['riches'])
[('teaching', 0.5970186591148376),
 ('mathematics', 0.5865542888641357),
 ('chemistry', 0.5711618661880493),
 ('poetry', 0.5596555471420288),
 ('precept', 0.5438899993896484),
 ('believer', 0.5431545972824097),
 ('deficient', 0.5400426983833313),
 ('poetical', 0.5400040745735168),
 ('virgil', 0.5367878675460815),
 ('yankee', 0.5292307734489441)]

Concluding Remarks and Resources

Throughout this notebook we have seen how a number of mathematical operations can be used to explore word2vec’s word embeddings. Hopefully it has also shown how the inherent biases of language become encoded in word embeddings, and why systems that use word embeddings cannot be treated as neutral search engines.

While getting inside the technics of these computational processes can enable us to answer a set of new, interesting questions dealing with semantics, there are many other questions that remain unanswered.

For example:

  • Many language models are built using text from large, online corpora (such as Wikipedia, which is known to have a contributor base that is majority white, college-educated men) - what kind of impact might this have on a language model?
  • What barriers to the healthy functioning of democracy are created by the widespread use of these tools and technologies in society?
  • How might language models challenge or renegotiate ideas around copyright, intellectual property and conceptions of authorship more broadly?
  • What might guardrails look like for the safe and equitable management and deployment of language models?

Other Resources for Further Learning

References

This notebook has been built using the following materials:

  • Arseniev-Koehler, A., & Foster, J. G. (2020). Sociolinguistic Properties of Word Embeddings [Preprint]. SocArXiv. https://doi.org/10.31235/osf.io/b8kud
  • Schiffers, R., Kern, D., & Hienert, D. (2023). Evaluation of Word Embeddings for the Social Sciences (arXiv:2302.06174). arXiv. http://arxiv.org/abs/2302.06174