Problems with the Syuzhet Package

I’ve been watching the developments with Matthew Jockers’s Syuzhet package and blog posts with interest over the last few months. I’m always excited to try new tools that I can bring into both the classroom and my own research. For those of you who are just now hearing about it, Syuzhet is a package for extracting and plotting the “emotional trajectory” of a novel.

The Syuzhet algorithm works as follows: First, you take the novel and split it up into sentences. Then, you use sentiment analysis to assign a positive or negative number to each sentence indicating how positive the sentence is. For example, “I’m happy” and “I like this” would have positive numbers, while “This is terrible” and “Everything is awful” would get negative numbers. Finally, you smooth out these numbers to get what Jockers calls the “foundation shape” of the novel, a smooth graph of how emotion rises and falls over the course of the novel’s plot.
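The three steps can be sketched in a few lines of Python. This is a toy illustration only: the four-word lexicon, the regex splitter, and the moving-average smoother below are my own stand-ins for Syuzhet’s actual components (Syuzhet itself is an R package and uses full sentiment lexicons and a Fourier-based filter):

```python
import re

# A made-up four-word lexicon; real lexicons like "Bing" contain thousands of words.
TOY_LEXICON = {"happy": 1, "like": 1, "terrible": -1, "awful": -1}

def get_sentences(text):
    # Step 1: naively split on sentence-final punctuation followed by whitespace.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def sentence_valence(sentence):
    # Step 2: add up the scores of every word in the sentence.
    words = re.findall(r"[a-z']+", sentence.lower())
    return sum(TOY_LEXICON.get(w, 0) for w in words)

def smooth(values, window=3):
    # Step 3: a trailing moving average standing in for Syuzhet's low-pass filter.
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

text = "I'm happy. I like this. This is terrible! Everything is awful."
valences = [sentence_valence(s) for s in get_sentences(text)]
print(valences)         # [1, 1, -1, -1]
print(smooth(valences))
```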

This is an interesting idea, and I installed the package to try it out, but I’ve encountered several substantial problems along the way that challenge Jockers’s conclusion that he has discovered “six, or possibly seven, archetypal plot shapes” common to novels. I communicated privately with him about some of these issues last month, and I hope these problems will be addressed in the next version of the package. Until then, users should be aware that the package does not work as advertised.

I’ll proceed step-by-step through the process of using the package, explaining the problems at each step.

1. Splitting Sentences

The first step of the algorithm is to split the text into sentences using Syuzhet’s “get_sentences” function. I tried running this on Charles Dickens’s Bleak House, and immediately ran into trouble: in many places, especially around dialogue, Syuzhet incorrectly interpreted multiple sentences as being just one sentence. This seemed to be particularly common around quotation marks. For example, here’s one “sentence” from the middle of Chapter III, according to Syuzhet:[1]

Mrs. Rachael, I needn’t inform you who were acquainted with the late Miss Barbary’s affairs, that her means die with her and that this young lady, now her aunt is dead–”

“My aunt, sir!”

“It is really of no use carrying on a deception when no object is to be gained by it,” said Mr. Kenge smoothly, “Aunt in fact, though not in law.

As you can imagine, these grouping errors are likely to cause problems for works with extensive dialogue (such as most novels and short stories).[2]
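This class of failure is easy to reproduce with a naive splitter (a sketch of the general problem, not Syuzhet’s actual code): if the splitter only looks for sentence-final punctuation followed by whitespace, a closing quotation mark hides the boundary.

```python
import re

dialogue = '"My aunt, sir!" "Aunt in fact, though not in law."'

# Split on . ! ? followed by whitespace.  The closing quotation mark
# sits between the exclamation point and the space, so the boundary
# after "sir!" is never found and two sentences come back as one.
pieces = [p.strip() for p in re.split(r"(?<=[.!?])\s+", dialogue) if p.strip()]
print(len(pieces))  # 1, not 2
```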

2. Assigning Value to Words

The second step is to compute the emotional valence of each sentence, a problem known as sentiment analysis. The Syuzhet package provides four options for sentiment analysis: “Bing”, “AFINN”, “NRC”, and “Stanford”; “Bing” is the default, and is what Jockers recommends in his documentation.

“Bing,” “AFINN,” and “NRC” are all simple lexicons:  each is a list of words with a precomputed positive or negative “score” for each word, and Syuzhet computes the valence of a sentence by simply adding together the scores of every word in it.

This approach has a number of drawbacks:

  1. Since each word is scored in isolation, it can’t process modifiers. This means firstly that intensifiers have no effect, so that adding “very” or “extremely” won’t change the valence, and secondly (and more worryingly) that negations have no effect. Consequently, the sentence “I am not happy today” has exactly the same positive valence as “I am extremely happy today” or just “I’m happy.”
  2. For the same reason, the algorithm can’t take the multiple meanings of words into consideration, so words such as “well” and “like” are often marked as positive, even when they’re used in neutral ways. The “Bing” lexicon, for example, considers the sentence “I am happy” to be less positive than the sentence “Well, it’s like a potato.”[3]
  3. All three lexicons primarily contain contemporary English words, because they were developed for analyzing modern documents like product reviews and tweets. As a result, dialect words may receive neutral values regardless of their actual emotional valence, and words whose meanings have changed since the Victorian period may have scores that do not at all reflect their use in the text. For example, “noisome,” “odours,” “execrations,” and “sulphurous” are negative words in Portrait of the Artist but are not negative in Bing’s lexicon.
  4. Syuzhet’s particular implementation of this approach only counts a word once for a given sentence even if it’s repeated, so that e.g. “I am happy–so happy–today” has the same valence as “I am happy today.”
  5. These lexicons also do not provide much nuance: Bing and NRC assign every word a value of -1 (negative terms), 0 (neutral terms), or 1 (positive terms). Thus, the two sentences “This is decent” and “This is wonderful!” both have valence 1, even though the second is clearly much more positive.
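Several of these drawbacks can be reproduced with a minimal bag-of-words scorer. The five-word lexicon below is my own stand-in for Bing, not its actual word list; the scorer mirrors the general approach described above:

```python
import re

# Toy lexicon in the Bing style: every known word scores -1 or 1.
LEXICON = {"happy": 1, "wonderful": 1, "decent": 1, "sad": -1, "bad": -1}

def valence(sentence):
    # A set, not a list: mirrors drawback 4, where a repeated word
    # is counted only once per sentence.
    words = set(re.findall(r"[a-z']+", sentence.lower()))
    return sum(LEXICON.get(w, 0) for w in words)

# Drawback 1: negations and intensifiers are invisible.
print(valence("I am not happy today"))        # 1
print(valence("I am extremely happy today"))  # 1

# Drawback 4: repetition is collapsed.
print(valence("I am happy--so happy--today"))  # 1

# Drawback 5: no gradation between mild and strong sentiment.
print(valence("This is decent"), valence("This is wonderful!"))  # 1 1
```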

To demonstrate some of these problems, I composed the following simple paragraph:

I haven’t been sad in a long time.
I am extremely happy today.
It’s a good day.
But suddenly I’m only a little bit happy.
Then I’m not happy at all.
In fact, I am now the least happy person on the planet.
There is no happiness left in me.
Wait, it’s returned!
I don’t feel so bad after all!

According to common sense, we’d expect the sentiment assigned to these sentences to start off fairly high, then decline rapidly from lines 4 to 7, and finally return to neutral (or slightly positive) at the end.

Using the Syuzhet package, we get the following sentiment trajectory:

[Figure: the sentiment trajectory Syuzhet computes for the sample paragraph]

The emotional trajectory does pretty much exactly the opposite of what we expected. It starts negative, because “I haven’t been sad in a long time” contains only one word with a recognized value: “sad.” It then rises and stays at the same level of positivity for the next several sentences, because “I am extremely happy today” and “There is no happiness left in me” are scored as equally positive. At the end, just as the narrative turns hopeful again, Syuzhet’s trajectory drops back to negative because it detects the word “bad.”[4]
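The inverted trajectory follows directly from the mechanics above. Scoring the paragraph with a toy -1/0/+1 lexicon (a five-word stand-in of my own devising, not Bing’s actual word list) reproduces the same shape:

```python
import re

# Toy -1/0/+1 lexicon standing in for Bing.
LEXICON = {"sad": -1, "happy": 1, "good": 1, "happiness": 1, "bad": -1}

def valence(sentence):
    words = set(re.findall(r"[a-z']+", sentence.lower()))
    return sum(LEXICON.get(w, 0) for w in words)

paragraph = [
    "I haven't been sad in a long time.",
    "I am extremely happy today.",
    "It's a good day.",
    "But suddenly I'm only a little bit happy.",
    "Then I'm not happy at all.",
    "In fact, I am now the least happy person on the planet.",
    "There is no happiness left in me.",
    "Wait, it's returned!",
    "I don't feel so bad after all!",
]
trajectory = [valence(s) for s in paragraph]
print(trajectory)
# [-1, 1, 1, 1, 1, 1, 1, 0, -1] -- negative start, flat positive middle,
# negative end: the opposite of the paragraph's actual emotional arc.
```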

This example showcases a number of the weaknesses of this sentiment analysis strategy on very straightforward text; I expect these problems will be far worse for novels that imply emotion through metaphor or patterns of imagery, that rely on satire and sarcasm (e.g. most works by Jane Austen, Jonathan Swift, Mark Twain, or Oscar Wilde), or that use irony or an unreliable narrator (e.g. much of postmodern literature).

Essentially, the Syuzhet package graphs the frequency of positively and negatively themed words throughout a text more than it graphs the text’s actual emotional valence.

3. Foundation Shapes

The final step of Syuzhet is to turn the emotional trajectory into a Foundation Shape–a simplified graph of the story’s emotional valence that (hopefully) echoes the shape of the plot. But once again, I found some problems. Syuzhet produces the Foundation Shape by putting the emotional trajectory through an ideal low-pass filter, which is designed to eliminate the noise of the trajectory and smooth out its extremes. Ideal low-pass filters work by approximating the function with a fixed number of sinusoidal waves; the smaller the number of sinusoids, the smoother the resulting graph will be.

However, ideal low-pass filters often introduce extra lobes or humps in parts of the graph that aren’t well-approximated by sinusoids. These extra lobes are called ringing artifacts, and will be larger when the number of sinusoids is lower.

Here’s a simple example:

[Figure: a one-lobe signal (left) and the ringing artifacts introduced by an ideal low-pass filter (right)]

The graph on the left is the original signal, and the graph on the right demonstrates the ringing artifacts caused by a low-pass filter (specifically, by zeroing all but the first five terms of the Fourier transform). The original signal just has one lobe in the middle, but the low-pass filter introduces extra lobes on either side.
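This behavior is easy to reproduce. The sketch below builds a similar one-lobe signal, zeroes all but the lowest six terms of its real Fourier transform, and inspects the result; the particular signal and cutoff are my own choices for illustration:

```python
import numpy as np

# A signal with a single lobe: flat, a bump in the middle, flat again.
n = 512
signal = np.zeros(n)
signal[200:312] = 1.0

def ideal_low_pass(x, n_terms):
    # Ideal low-pass filter: keep the DC term and the lowest harmonics,
    # zero everything else, and transform back.
    coeffs = np.fft.rfft(x)
    coeffs[n_terms:] = 0
    return np.fft.irfft(coeffs, len(x))

smoothed = ideal_low_pass(signal, 6)

# The original never goes below zero, but the filtered version dips
# negative on either side of the bump: those dips are ringing artifacts.
print(signal.min())            # 0.0
print(smoothed.min() < -0.01)  # True
```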

By default, Syuzhet uses an even lower cutoff than the example above (keeping only three Fourier terms). Consequently, we should expect to find inaccurate lobes in the resulting foundation shapes. The Portrait of the Artist foundation shape that Jockers presented in his post “Revealing Sentiment and Plot Arcs with the Syuzhet Package” already shows this: [5]

[Figure: Jockers’s foundation shape for Portrait of the Artist, alongside the full sentiment trajectory]

The full trajectory opens with a largely flat stretch and a strong negative spike around x=1100 that then rises back to neutral by about x=1500. The foundation shape, on the other hand, opens with a rise, and in fact peaks in positivity right around where the original signal peaks in negativity. In other words, the foundation shape for the first part of the book is not merely inaccurate, but in fact exactly opposite the actual shape of the original graph.

This is a pretty serious problem, and it means that until Syuzhet provides filters that don’t cause ringing artifacts, it is likely that most foundation shapes will be inaccurate representations of the stories’ true plot trajectories.  Since the foundation shape may in places be the opposite of the emotional trajectory, two foundation shapes may look identical despite having opposing emotional valences. Jockers’s claim that he has derived “the six/seven plot archetypes” of literature from a sample of “41,383 novels” may be due more to ringing artifacts than to an actual similarity between the emotional structures of the analyzed novels.

While Syuzhet is a very interesting idea, its implementation suffers from a number of problems, including an unreliable sentence splitter, a sentiment analysis engine incapable of evaluating many sentences, and a foundation shape algorithm that fundamentally distorts the original data. Some of these problems may be fixable–there are certainly smoothing filters that don’t suffer from ringing artifacts[6]–and while I don’t know what the current state of the art in sentence detection is, I imagine algorithms exist that understand quotation marks. The failures of sentiment analysis, though, suggest that Syuzhet’s goals may not be realizable with existing tools. Until the foundation shapes and the problems with the implementation of sentiment analysis are addressed, the Syuzhet package cannot accomplish what it claims to do. I’m looking forward to seeing how these problems are addressed in future versions of the package.

Special Thanks:

I’d like to thank the following people who have consulted with me on sentiment analysis and signal processing and read versions of this blog post.

Daniel Lepage, Senior Software Engineer, Maternity Neighborhood

Rafael Frongillo, Postdoctoral Fellow, Center for Research on Computation and Society, Harvard University

Brian Gawalt, Senior Data Scientist, Elance-oDesk

Sarah Gontarek

[1] The excerpt doesn’t include quotation marks at the beginning and end because both the opening and closing sentences are part of larger passages of dialogue.

[2] This problem was not visible with the sample dataset of Portrait of the Artist, because the Project Gutenberg text uses dashes instead of quotation marks.

[3] This example also shows another problem: longer sentences may be given greater positivity or negativity than their contents warrant, merely because they contain a greater number of positive or negative words. For instance, “I am extremely happy!” would have a lower positivity ranking than “Well, I’m not really happy; today, I spilled my delicious, glorious coffee on my favorite shirt and it will never be clean again.”

[4] The Stanford algorithm is much more robust: it has more granularity in its categories of emotion and does consider negation. However, it also fails on the sample paragraph above, and it produced multiple “Not a Number” values when we ran it on Bleak House, rendering it unusable.

[5] Other scholars have also noticed similar problems, as Jonathan Goodwin’s results demonstrate.

[6] For example, Gaussian filters do not introduce ringing artifacts, though they have their own limitations.
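A quick check of this claim, using the same kind of one-lobe signal as in the ringing example and a hand-built Gaussian kernel (the kernel width is my own arbitrary choice):

```python
import numpy as np

# The same kind of one-lobe signal used to demonstrate ringing.
signal = np.zeros(512)
signal[200:312] = 1.0

# A normalized Gaussian kernel: every weight is positive, so convolving
# a non-negative signal with it can never produce a negative value.
sigma = 20.0
x = np.arange(-80, 81)
kernel = np.exp(-x**2 / (2 * sigma**2))
kernel /= kernel.sum()

smoothed = np.convolve(signal, kernel, mode="same")
print(smoothed.min() >= 0)  # True: no ringing artifacts
```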


SUNY New Paltz Funded a Digital Scholarship Center

I’m happy to report that SUNY New Paltz has funded an interdisciplinary digital scholarship center to be housed in the Sojourner Truth Library! My colleague Melissa Rock (Department of Geography) and I submitted a grant proposal for internal funds, and the President and Provost agreed to fully fund its initial start-up.  We’re very excited that the administration has decided to make digital scholarship such a high priority!

We’re still tossing around name ideas–the current leader is Digital Arts, Social Sciences, and Humanities Lab (DASSH Lab, for short)–and we won’t have access to our new space for a few months yet, but we’re working on setting up a temporary home as we gear up for workshops, training sessions for classes, and a speaker series.

Here’s the information about the center:

Faculty members in departments throughout the university, including Geography, English, Education, Anthropology, Computer Science, Biology, and Graphic Design, have expressed great interest in integrating digital technologies into their own research and classroom curricula. However, they lack the expertise, equipment, and access to space necessary to use these technologies effectively: most specialized computer labs are reserved for professors and students in that department, and most other computer labs, in addition to lacking specialized software, are consistently booked with classes. This center will provide the training, equipment, software, and workshops necessary for faculty from throughout the campus to support teaching and learning with digital technology by creating digital video essays, podcasts, websites, digital archives and editions, and visualizations. The center is vital to ensuring that SUNY New Paltz professors use cutting-edge techniques in their research and pedagogy.

Stay tuned for more information!

Born Digital: From Archives to Maps

Below are links to the tools, data, instructions, and examples I mentioned in my talk on building digital humanities projects, given at SUNY New Paltz on December 3rd in the Honors College.

Digital Archives:

Tool: Omeka:

Data: “Civil Rights—A Long Road”:


Example: 19th Century Disability Studies:

Digital Editions:

Tool: Juxta Editions:

Data: The Strand Magazine:

and Sherlock Holmes full text:

Example: “The Five Orange Pips”:

Distant Reading:

Tool: Topic Modeling Tool:

Data: State of the Union addresses:


Example: Mining the Dispatch:


Tool: Voyant:

Data: “Scandal in Bohemia”:


Tool: Google Fusion:

Data: NCES Education Data (2013):

Instructions (for Fusion):



Tool: Google Maps:


Example: Mapping Ulysses:

Digital History: Archives, Mapping, and Visualizations

Below are links from my 10/22 talk on Digital History.


Papers of the War Department:

Emergence of Advertising in America:
Votes for Women:

Victorian Dictionary:

Proceedings of the Old Bailey:


Locating London’s Past:

Mapping the Republic of Letters:

Invasion of America:

Slave Revolt in Jamaica:

Spread of Slavery:

Visualizing Emancipation:

Voting America: United States Politics, 1840-2008:

 3D Models:

Rome Reborn:

Virtual Paul’s Cross Project:

 Multimedia Archives:

Roaring Twenties, historical soundscape:

Library of Congress, Recorded Sound Reference Center:


Doing DH:

Programming Historian:

Spatial History Project:


Digital Anthropology Links

Here are the links from my 10/1 talk on Digital Anthropology:


DAACS (Digital Archaeological Archive of Comparative Slavery):

Digital Himalaya Project:

Inuvialuit Living History:

Rome Reborn: A Digital Model of Ancient Rome:

Chaco Research Archive:

World Oral Literature Project:

Digitized Diseases:

3D Printing:

West African Pipe Bowl, Model:

Cornell Creative Machines Lab:


DART: Digital Anthropology Resources for Teaching:


Tools for Creating Digital Projects:




Google Fusion Tables:

Google Maps:

Digital Humanities in English Departments: Beyond the Boundary of the Book

Here are the links for projects and tools from my 9/30 workshop at the Honors Center at SUNY New Paltz.


Mapping Ulysses:

Mapping the Lakes:

Map And Plan Collection Online:


Visualizing Heart of Darkness:

Voyant (Shakespeare):


Women’s archives:

Orlando Project:

Women Writers Project:

Multimedia archives:

Global Shakespeare:


Individual archives:

Willa Cather Archive:

Walt Whitman Archive:

Archives of Journals:

The Making of America:

Modernist Journals Project:


Old Bailey Online:

BRANCH Collective (Britain, Representation, and Nineteenth-Century History):


Juxta Editions:


Digital Pedagogy Workshop

Below are links to the tools, projects, and resources I presented in a workshop on Digital Pedagogy at SUNY New Paltz.

Updated:  Here’s a link to a video of my talk:

Tools, Projects, and Resources

 Online discussion:

Google Docs:

Annotation Studio:



Voyant Tools:




Placing Literature:

Assigning Online Archives:

The Proceedings of the Old Bailey:

Library of Congress, Recorded Sound Reference Center:

Branch Collective:

Victorian Dictionary:

Student Projects:

Following the River (by Adi Fracchi):

Additional Resources:

ARC nodes (for peer-reviewed digital projects):

DiRT (Digital Research Tools):

Digital Humanities Questions and Answers:

DHSI class on Digital Pedagogy:


Hybrid Pedagogy:

Twitter, #digiped:

Journal of Interactive Technology & Pedagogy: