Problems with the Syuzhet Package

I’ve been watching the developments with Matthew Jockers’s Syuzhet package and blog posts with interest over the last few months. I’m always excited to try new tools that I can bring into both the classroom and my own research. For those of you who are just now hearing about it, Syuzhet is a package for extracting and plotting the “emotional trajectory” of a novel.

The Syuzhet algorithm works as follows: First, you split the novel into sentences. Then, you use sentiment analysis to assign each sentence a number indicating its emotional valence: sentences like “I’m happy” and “I like this” get positive numbers, while sentences like “This is terrible” and “Everything is awful” get negative numbers. Finally, you smooth out these numbers to get what Jockers calls the “foundation shape” of the novel, a smooth graph of how emotion rises and falls over the course of the novel’s plot.
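If I’m reading the documentation correctly, the whole pipeline boils down to a handful of R calls along these lines (a sketch of my own, not a recipe from Jockers’s posts; the file path is hypothetical):

    library(syuzhet)

    novel      <- get_text_as_string("bleak_house.txt")      # load the full text
    sentences  <- get_sentences(novel)                       # step 1: split into sentences
    sentiment  <- get_sentiment(sentences, method = "bing")  # step 2: score each sentence
    foundation <- get_transformed_values(sentiment,          # step 3: smooth into the
                                         low_pass_size = 3)  #         "foundation shape"

    plot(foundation, type = "l")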

This is an interesting idea, and I installed the package to try it out, but I’ve encountered several substantial problems along the way that challenge Jockers’s conclusion that he has discovered “six, or possibly seven, archetypal plot shapes” common to novels. I communicated privately with him about some of these issues last month, and I hope these problems will be addressed in the next version of the package. Until then, users should be aware that the package does not work as advertised.

I’ll proceed step-by-step through the process of using the package, explaining the problems at each step.

1. Splitting Sentences

The first step of the algorithm is to split the text into sentences using Syuzhet’s “get_sentences” function. I tried running this on Charles Dickens’s Bleak House, and immediately ran into trouble: in many places, especially around dialogue, Syuzhet incorrectly interpreted multiple sentences as being just one sentence. This seemed to be particularly common around quotation marks. For example, here’s one “sentence” from the middle of Chapter III, according to Syuzhet:[1]

Mrs. Rachael, I needn’t inform you who were acquainted with the late Miss Barbary’s affairs, that her means die with her and that this young lady, now her aunt is dead–”

“My aunt, sir!”

“It is really of no use carrying on a deception when no object is to be gained by it,” said Mr. Kenge smoothly, “Aunt in fact, though not in law.

As you can imagine, these grouping errors are likely to cause problems for works with extensive dialogue (such as most novels and short stories).[2]
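One way to see the problem for yourself is to run get_sentences on a short, dialogue-heavy passage and inspect the result (the passage below is adapted from the excerpt above; the exact behavior depends on the package version):

    library(syuzhet)

    passage <- paste(
      '"It is really of no use carrying on a deception when no object is to be',
      'gained by it," said Mr. Kenge smoothly. "My aunt, sir!" she repeated.',
      'He went on as though she had not spoken.'
    )
    get_sentences(passage)
    # If the quotation marks confuse the splitter, several of these sentences
    # come back glued together as a single element of the result.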

2. Assigning Value to Words

The second step is to compute the emotional valence of each sentence, a problem known as sentiment analysis. The Syuzhet package provides four options for sentiment analysis: “Bing”, “AFINN”, “NRC”, and “Stanford”; “Bing” is the default, and is what Jockers recommends in his documentation.

“Bing,” “AFINN,” and “NRC” are all simple lexicons:  each is a list of words with a precomputed positive or negative “score” for each word, and Syuzhet computes the valence of a sentence by simply adding together the scores of every word in it.
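In effect, the scoring is a dictionary lookup plus a sum, something like this toy R sketch (my own illustration, not the package’s code, with an invented miniature lexicon):

    toy_lexicon <- c(happy = 1, like = 1, well = 1, good = 1,
                     terrible = -1, awful = -1, sad = -1)

    score_sentence <- function(sentence) {
      words <- tolower(unlist(strsplit(gsub("[[:punct:]]", " ", sentence), "\\s+")))
      sum(toy_lexicon[words], na.rm = TRUE)   # words not in the lexicon contribute nothing
    }

    score_sentence("I am extremely happy today")  #  1
    score_sentence("I am not happy today")        #  1 -- the negation is invisible
    score_sentence("This is terrible")            # -1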

This approach has a number of drawbacks:

  1. Since each word is scored in isolation, it can’t process modifiers. This means firstly that intensifiers have no effect, so that adding “very” or “extremely” won’t change the valence, and secondly (and more worryingly) that negations have no effect. Consequently, the sentence “I am not happy today” has exactly the same positive valence as “I am extremely happy today” or just “I’m happy.”
  2. For the same reason, the algorithm can’t take the multiple meanings of words into consideration, so words such as “well” and “like” are often marked as positive, even when they’re used in neutral ways. The “Bing” lexicon, for example, considers the sentence “I am happy” to be less positive than the sentence “Well, it’s like a potato.”[3]
  3. All three lexicons primarily contain contemporary English words, because they were developed for analyzing modern documents like product reviews and tweets. As a result, dialect words may receive neutral scores regardless of their actual emotional valence, and words whose meanings have changed since the Victorian period may have scores that do not at all reflect their use in the text. For example, “noisome,” “odours,” “execrations,” and “sulphurous” are clearly negative in Portrait of the Artist but are not negative in the Bing lexicon.
  4. Syuzhet’s particular implementation of this approach only counts a word once for a given sentence even if it’s repeated, so that e.g. “I am happy–so happy–today” has the same valence as “I am happy today.”
  5. These lexicons also do not provide much nuance: Bing and NRC assign every word a value of -1 (negative terms), 0 (neutral terms), or 1 (positive terms). Thus, the two sentences “This is decent” and “This is wonderful!” both have valence 1, even though the second is clearly much more positive.

To demonstrate some of these problems, I composed the following simple paragraph:

I haven’t been sad in a long time.
I am extremely happy today.
It’s a good day.
But suddenly I’m only a little bit happy.
Then I’m not happy at all.
In fact, I am now the least happy person on the planet.
There is no happiness left in me.
Wait, it’s returned!
I don’t feel so bad after all!

According to common sense, we’d expect the sentiment assigned to these sentences to start off fairly high, then decline rapidly from lines 4 to 7, and finally return to neutral (or slightly positive) at the end.
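The calls to produce the trajectory look roughly like this (a sketch; “bing” is the default method, spelled out here for clarity):

    library(syuzhet)

    sample_sentences <- c(
      "I haven't been sad in a long time.",
      "I am extremely happy today.",
      "It's a good day.",
      "But suddenly I'm only a little bit happy.",
      "Then I'm not happy at all.",
      "In fact, I am now the least happy person on the planet.",
      "There is no happiness left in me.",
      "Wait, it's returned!",
      "I don't feel so bad after all!"
    )

    sample_sentiment <- get_sentiment(sample_sentences, method = "bing")
    plot(sample_sentiment, type = "b", xlab = "Sentence", ylab = "Valence")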

Using the Syuzhet package, we get the following sentiment trajectory:

[Figure: Syuzhet’s sentiment trajectory for the sample paragraph]

The emotional trajectory does pretty much exactly the opposite of what we expected. It starts negative, because the only word in “I haven’t been sad in a long time” with a recognized value is “sad.” It then rises to the same level of positivity for the next several sentences, because “I am extremely happy today” and “There is no happiness left in me” are scored as equally positive. At the end, as the narrative turns hopeful again, the trajectory drops back to negative because Syuzhet detected the word “bad” in the final sentence.[4]

This example showcases a number of the weaknesses of this sentiment analysis strategy on very straightforward text; I expect these problems to be far worse for novels that convey emotion through metaphor or patterns of imagery, or that use satire and sarcasm (e.g. most works by Jane Austen, Jonathan Swift, Mark Twain, or Oscar Wilde), irony, or an unreliable narrator (e.g. much of postmodern literature).

Essentially, the Syuzhet package graphs the frequency of positively and negatively themed words across a text more than it graphs the text’s actual emotional valence.

3. Foundation Shapes

The final step of Syuzhet is to turn the emotional trajectory into a Foundation Shape–a simplified graph of the story’s emotional valence that (hopefully) echoes the shape of the plot. But once again, I found some problems. Syuzhet produces the Foundation Shape by putting the emotional trajectory through an ideal low-pass filter, which is designed to eliminate the noise of the trajectory and smooth out its extremes. Ideal low-pass filters work by approximating the function with a fixed number of sinusoidal waves; the smaller the number of sinusoids, the smoother the resulting graph will be.

However, ideal low-pass filters often introduce extra lobes or humps in parts of the graph that aren’t well-approximated by sinusoids. These extra lobes are called ringing artifacts, and will be larger when the number of sinusoids is lower.

Here’s a simple example:

[Figures: the original signal (left) and the output of the low-pass filter (right)]

The graph on the left is the original signal, and the graph on the right demonstrates the ringing artifacts caused by a low-pass filter (specifically, by zeroing all but the first five terms of the Fourier transform). The original signal just has one lobe in the middle, but the low-pass filter introduces extra lobes on either side.
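For anyone who wants to reproduce this effect, here is a small self-contained R sketch (my own toy example, not Syuzhet’s code) that builds a signal with a single lobe and keeps only the first five Fourier terms:

    n      <- 100
    signal <- c(rep(0, 40), rep(1, 20), rep(0, 40))   # one lobe in the middle

    keep <- 5
    ft   <- fft(signal)
    ft[(keep + 1):(n - keep + 1)] <- 0                # zero all but the lowest-frequency
                                                      # terms (and their conjugate partners)
    smoothed <- Re(fft(ft, inverse = TRUE)) / n       # R's inverse FFT is unnormalized

    plot(signal, type = "l", ylim = c(-0.3, 1.3))
    lines(smoothed, col = "red")                      # the red curve overshoots and dips
                                                      # below zero on either side: ringing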

By default, Syuzhet uses an even lower cutoff than the example above (keeping only three Fourier terms). Consequently, we should expect to find inaccurate lobes in the resulting foundation shapes. The Portrait of the Artist foundation shape that Jockers presented in his post “Revealing Sentiment and Plot Arcs with the Syuzhet Package” already shows this: [5]

[Figure: the Portrait of the Artist raw sentiment trajectory with its foundation shape]

The full trajectory opens with a largely flat stretch and a strong negative spike around x=1100, after which it rises back to neutral by about x=1500. The foundation shape, on the other hand, opens with a rise, and in fact peaks in positivity right around where the original signal peaks in negativity. In other words, the foundation shape for the first part of the book is not merely inaccurate, but in fact exactly opposite to the actual shape of the original graph.

This is a pretty serious problem, and it means that until Syuzhet provides filters that don’t cause ringing artifacts, it is likely that most foundation shapes will be inaccurate representations of the stories’ true plot trajectories.  Since the foundation shape may in places be the opposite of the emotional trajectory, two foundation shapes may look identical despite having opposing emotional valences. Jockers’s claim that he has derived “the six/seven plot archetypes” of literature from a sample of “41,383 novels” may be due more to ringing artifacts than to an actual similarity between the emotional structures of the analyzed novels.

While Syuzhet is a very interesting idea, its implementation suffers from a number of problems, including an unreliable sentence splitter, a sentiment analysis engine incapable of evaluating many sentences, and a foundation shape algorithm that fundamentally distorts the original data. Some of these problems may be fixable–there are certainly smoothing filters that don’t suffer from ringing artifacts[6]–and while I don’t know what the current state of the art in sentence detection is, I imagine algorithms exist that understand quotation marks. The failures of sentiment analysis, though, suggest that Syuzhet’s goals may not be realizable with existing tools. Until the foundation shapes and the problems with the implementation of sentiment analysis are addressed, the Syuzhet package cannot accomplish what it claims to do. I’m looking forward to seeing how these problems are addressed in future versions of the package.

Special Thanks:

I’d like to thank the following people who have consulted with me on sentiment analysis and signal processing and read versions of this blog post.

Daniel Lepage, Senior Software Engineer, Maternity Neighborhood

Rafael Frongillo, Postdoctoral Fellow, Center for Research on Computation and Society, Harvard University

Brian Gawalt, Senior Data Scientist, Elance-oDesk

Sarah Gontarek

[1] The excerpt doesn’t include quotation marks at the beginning and end because both the opening and closing sentences are part of larger passages of dialogue.

[2] This problem was not visible with the sample dataset of Portrait of the Artist, because the Project Gutenberg text uses dashes instead of quotation marks.

[3] This example also shows another problem: longer sentences may be given greater positivity or negativity than their contents warrant, merely because they contain a greater number of positive or negative words. For instance, “I am extremely happy!” would have a lower positivity ranking than “Well, I’m not really happy; today, I spilled my delicious, glorious coffee on my favorite shirt and it will never be clean again.”

[4] The Stanford algorithm is much more robust: it has more granularity in its categories of emotion and does consider negation. However, it also fails on the sample paragraph above, and it produced multiple “Not a Number” values when we ran it on Bleak House, rendering it unusable.

[5] Other scholars have also noticed similar problems, as Jonathan Goodwin’s results demonstrate: https://twitter.com/joncgoodwin/status/563734388484354048/photo/1

[6] For example, Gaussian filters do not introduce ringing artifacts, though they have their own limitations (http://homepages.inf.ed.ac.uk/rbf/HIPR2/freqfilt.htm).
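To make footnote [6] concrete: smoothing the same rectangular lobe from the section 3 example with a Gaussian kernel (a simple weighted moving average; my own sketch, not a Syuzhet feature) produces no ringing:

    gaussian_smooth <- function(x, sigma = 5) {
      k <- dnorm(seq(-3 * sigma, 3 * sigma), sd = sigma)   # Gaussian weights
      stats::filter(x, k / sum(k), sides = 2)              # centered moving average (NA at edges)
    }

    n      <- 100
    signal <- c(rep(0, 40), rep(1, 20), rep(0, 40))
    plot(signal, type = "l", ylim = c(-0.3, 1.3))
    lines(as.numeric(gaussian_smooth(signal)), col = "blue")  # smooth, with no overshoot below zero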

SUNY New Paltz Funded a Digital Scholarship Center

I’m happy to report that SUNY New Paltz has funded an interdisciplinary digital scholarship center to be housed in the Sojourner Truth Library! My colleague Melissa Rock (Department of Geography) and I submitted a grant proposal for internal funds, and the President and Provost agreed to fully fund its initial start-up.  We’re very excited that the administration has decided to make digital scholarship such a high priority!

We’re still tossing around name ideas–the current leader is Digital Arts, Social Sciences, and Humanities Lab (DASSH Lab, for short)–and we won’t have access to our new space for a few months yet, but we’re working on setting up a temporary home as we gear up for workshops, training sessions for classes, and a speaker series.

Here’s the information about the center:

Faculty members in departments throughout the university, including Geography, English, Education, Anthropology, Computer Science, Biology, and Graphic Design, have expressed great interest in integrating digital technologies into their own research and classroom curricula. However, they lack the expertise, equipment, and access to space necessary to use these technologies effectively; most specialized computer labs are reserved for professors and students in the departments that run them, and most other computer labs, in addition to lacking specialized software, are consistently booked with classes. This center will provide the training, equipment, software, and workshops necessary for faculty from across the campus to support teaching and learning with digital technology by creating digital video essays, podcasts, websites, digital archives and editions, and visualizations. This center is vital to ensuring that SUNY New Paltz professors are using cutting-edge techniques in their research and pedagogy.

Stay tuned for more information!

Born Digital: From Archives to Maps

Below are links to the tools, data, instructions, and examples I mentioned in my talk on building digital humanities projects, given at SUNY New Paltz on December 3rd in the Honors College.

Digital Archives:

Tool: Omeka: https://www.omeka.net/

Data: “Civil Rights—A Long Road”: http://tinyurl.com/civilrightsimg

Instructions: http://programminghistorian.org/lessons/up-and-running-with-omeka

Example: 19th Century Disability Studies: http://www.nineteenthcenturydisability.org/

Digital Editions:

Tool: Juxta Editions: http://www.juxtaeditions.com/

Data: The Strand Magazine: https://archive.org/details/StrandMagazine9

and Sherlock Holmes full text: https://sherlock-holm.es/stories/plain-text/advs.txt

Instructions: http://sherlockholmeslondondh.wordpress.com/2014/10/08/juxta/
Example: “The Five Orange Pips”: https://www.juxtaeditions.com/documents/304

Distant Reading:

Tool: Topic Modeling Tool: https://topic-modeling-tool.googlecode.com/files/TopicModelingTool.jar

Data: State of the Union addresses: http://tinyurl.com/stateofunionzip

Instructions: http://sherlockholmeslondondh.wordpress.com/2014/10/27/topic-modeling/

Example: Mining the Dispatch: http://dsl.richmond.edu/dispatch/

 Visualizations:

Tools:  Voyant: http://voyant-tools.org/

Data: “Scandal in Bohemia”: https://sherlock-holm.es/stories/plain-text/scan.txt

 

Tool: Google Fusion: http://tables.googlelabs.com/

Data: NCES Education Data (2013): https://inventory.data.gov/dataset/032e19b4-5a90-41dc-83ff-6e4cd234f565/resource/38625c3d-5388-4c16-a30f-d105432553a4

Instructions (for Fusion): https://support.google.com/fusiontables/answer/184641?hl=en

Example: https://sites.google.com/site/fusiontablestalks/stories

 GIS:

Tool: Google Maps: https://www.google.com/maps

Instructions: https://support.google.com/maps/answer/3045850?hl=en

Example: Mapping Ulysses: https://sites.google.com/site/notesonjamesjoyce/map

Digital History: Archives, Mapping, and Visualizations

Below are links from my 10/22 talk on Digital History.

 Archives:

Papers of the War Department: http://wardepartmentpapers.org/index.php

Emergence of Advertising in America:
http://library.duke.edu/digitalcollections/eaa/
Votes for Women:
http://memory.loc.gov/ammem/naw/nawshome.html

Victorian Dictionary: http://www.victorianlondon.org/index-2012.htm

Proceedings of the Old Bailey: http://www.oldbaileyonline.org/

 Mapping:

Locating London’s Past: http://www.locatinglondon.org/

Mapping the Republic of Letters: http://republicofletters.stanford.edu/

Invasion of America: http://invasionofamerica.ehistory.org/

Slave Revolt in Jamaica: http://revolt.axismaps.com/project.html

Spread of Slavery: http://lincolnmullen.com/projects/slavery/

Visualizing Emancipation: http://dsl.richmond.edu/emancipation/

Voting America: United States Politics, 1840-2008: http://dsl.richmond.edu/voting/

 3D Models:

Rome Reborn: http://romereborn.frischerconsulting.com/gallery-current.php

Virtual Paul’s Cross Project: http://vpcp.chass.ncsu.edu/

 Multimedia Archives:

Roaring Twenties, historical soundscape: http://vectors.usc.edu/projects/index.php?project=98

Library of Congress, Recorded Sound Reference Center: http://www.loc.gov/rr/record/onlinecollections.html

 Resources:

Doing DH: http://history2014.doingdh.org/readings-and-resources/sites/

Programming Historian: http://programminghistorian.org/

Spatial History Project: http://web.stanford.edu/group/spatialhistory/cgi-bin/site/index.php

ARC: http://idhmc.tamu.edu/arcgrant/nodes/

Digital Anthropology Links

Here are the links from my 10/1 talk on Digital Anthropology:

Archives:

DAACS (Digital Archaeological Archive of Comparative Slavery): http://www.daacs.org/

Digital Himalaya Project: http://www.digitalhimalaya.com/

Inuvialuit Living History: http://www.inuvialuitlivinghistory.ca/

Rome Reborn: A Digital Model of Ancient Rome: http://romereborn.frischerconsulting.com/

Chaco Research Archive: http://www.chacoarchive.org/cra/

World Oral Literature Project: http://www.oralliterature.org/

Digitized Diseases: http://www.digitiseddiseases.org/alpha/

3D Printing:

West African Pipe Bowl, Model: http://www.thingiverse.com/thing:184617

Cornell Creative Machines Lab: http://creativemachines.cornell.edu/cuneiform

Pedagogy:

DART: Digital Anthropology Resources for Teaching: http://www.lse.ac.uk/anthropology/research/dart/dart.aspx

eFossils: http://efossils.org/

Tools for Creating Digital Projects:

Omeka: http://omeka.org/

StoryMaps: http://storymaps.arcgis.com/en/

MapBox: https://www.mapbox.com/

Google Fusion Tables: http://tables.googlelabs.com/

Google Maps: https://support.google.com/maps/answer/3045850?hl=en

Digital Humanities in English Departments: Beyond the Boundary of the Book

Here are the links for projects and tools from my 9/30 workshop at the Honors Center at SUNY New Paltz.

Mapping:

Mapping Ulysses:

https://sites.google.com/site/notesonjamesjoyce/map

Mapping the Lakes: http://www.lancaster.ac.uk/mappingthelakes/

Map And Plan Collection Online: http://mapco.net/

Visualizations:

Visualizing Heart of Darkness: http://www-958.ibm.com/software/analytics/manyeyes/visualize/joseph-conrad-heart-of-darkness-wi/versions/1

Voyant (Shakespeare): http://voyant-tools.org/?corpus=shakespeare&stopList=stop.en.taporware.txt

Archives/Editions:

Women’s archives:

Orlando Project: http://orlando.cambridge.org/

Women Writers Project: http://www.wwp.northeastern.edu/wwo/

Multimedia archives:

Global Shakespeare: http://globalshakespeares.mit.edu/

PennSound: http://writing.upenn.edu/pennsound/

Individual archives:

Willa Cather Archive: http://cather.unl.edu/

Walt Whitman Archive: http://www.whitmanarchive.org/

Archives of Journals:

The Making of America: http://digital.library.cornell.edu/m/moa/

Modernist Journals Project: http://modjourn.org/journals.html

History:

Old Bailey Online: http://www.oldbaileyonline.org/

BRANCH Collective (Britain, Representation, and Nineteenth-Century History): http://www.branchcollective.org/

 Resources:

Juxta Editions: http://www.juxtaeditions.com

ARC: http://idhmc.tamu.edu/arcgrant/nodes/

Digital Pedagogy Workshop

Below are links to the tools, projects, and resources I presented in a workshop on Digital Pedagogy at SUNY New Paltz.

Updated:  Here’s a link to a video of my talk: http://sites.newpaltz.edu/tlc/2014/09/go-to-recording/

Tools, Projects, and Resources

 Online discussion:

Google Docs: http://www.docs.google.com

Annotation Studio: http://www.annotationstudio.org/

NowComment: http://nowcomment.com/

Visualization:

Voyant Tools: http://voyant-tools.org/

ManyEyes: http://www-958.ibm.com/software/analytics/labs/manyeyes/

Prism: http://prism.scholarslab.org/

Mapping:

Placing Literature: http://www.placingliterature.com/

Assigning Online Archives:

The Proceedings of the Old Bailey: http://www.oldbaileyonline.org/

Library of Congress, Recorded Sound Reference Center: http://www.loc.gov/rr/record/onlinecollections.html

Branch Collective: http://www.branchcollective.org/

Victorian Dictionary: http://www.victorianlondon.org/index-2012.htm

Student Projects:

Following the River (by Adi Fracchi): http://tinyurl.com/followingriver

Additional Resources:

ARC nodes (for peer-reviewed digital projects): http://idhmc.tamu.edu/arcgrant/nodes/

DiRT (Digital Research Tools): http://dirtdirectory.org/

Digital Humanities Questions and Answers: http://digitalhumanities.org/answers/

DHSI class on Digital Pedagogy: http://dhsi.org/courses.php

Blogs/Journals:

Hybrid Pedagogy: http://www.hybridpedagogy.com/

Twitter, #digiped: https://twitter.com/hashtag/digiped

Journal of Interactive Technology & Pedagogy: http://jitp.commons.gc.cuny.edu/