The Kind You Find in a Second Hand Store

The year I turned eight, I listened to Prince every Wednesday night. Wednesdays were my mother’s night to drive me and two other girls to and from swim practice. The practices were held at Wellesley College, and in my memory they started late — after dinner, certainly, though before dessert. The trip to the pool from my home was, in total, around twenty minutes, or five Prince songs. The album must have been a Best Of, for it contained Little Red Corvette and Raspberry Beret and When Doves Cry and 1999. I loved them all, but the first two I loved the best, particularly Raspberry Beret, with its mix of Springsteen nostalgia and peacock exoticism.

I loved them, and yet they inspired a nagging dread, for they were always played on the way to practice, with its endless, burning laps, and never on the way back. (The way back was reserved for The Beatles, whom I to this day associate with roast chicken.)

I don’t know if it was because of the chlorinated dread his songs inspired, but I have not listened to Prince much in the twenty years since those carpool nights. And then tonight, I heard licks of Little Red Corvette as I was walking home from the subway. In front of Spike Lee’s studio on South Elliott Street, a block party was underway. A DJ played 1999 and Around the World in a Day and Sign o’ the Times while Lee himself grinned maniacally and exhorted the crowd with two lit iPhones. Small children swayed on shoulders. An older man, portly in a blue blazer, told his son that this night was about respecting his generation. Two women talked about the time they saw Prince in Savannah. A man reminisced about dancing to 1999 in its pre-Y2K heyday. On a parked car, two small girls took selfies.

I have lived in Fort Greene for nearly four years now. I like the trees and the plenitude of cafes and transportation, and I love the park, but it’s never, until tonight, felt like a place I belonged to.


So You Want to Be a Data Analyst, Chapter 3: The Relational Database

In his short story Funes the Memorious, the Argentine writer Jorge Luis Borges writes of a teenager, Funes, whose mind contains the world, and all of its history. To access this information, Funes devises a system of one-to-one enumeration.

“In place of seven thousand thirteen, he would say (for example) Máximo Pérez; in place of seven thousand fourteen, The Train; other numbers were Luis Melián Lafinur, Olimar, Brimstone, Clubs, The Whale, Gas, The Cauldron, Napoleon, Agustín de Vedia. In lieu of five hundred, he would say nine.”

This system, the narrator points out, is ridiculously inefficient — indeed, it is a degradation of the grouping systems we already have, the hundreds and tens and birds and planets.

The original databases, magnetic tape that stored references to information, similar to punch cards, were more efficient than Funes’ enumeration system, but only slightly. The actual data and its reference were stored hierarchically: sunflower growth statistics, let’s say, roll up into sunflowers, which in turn roll up into flowers. Any time a single record needed to be updated, the entire tape would be rewritten. And finding that record necessitated first finding its reference in a vast spool of tape — not quite needle:haystack, but close.

[Image: an early database]

I won’t dive much further into database history, except to say that in 1969, a Brit named E.F. Codd published a paper that introduced the concept of relational databases, and explained their advantage over the hierarchical and network databases: namely, that they allow you and me to care only about what we are looking for, and not about where we must go to find it.

While these days, databases come in a variety of architectures, some hyper-structured, some basically conjoined jellyfish, the relational database is still the pre-eminent database for most mid-size and large companies (though, if they are in the business of content or social network creation, it’s likely they have a few jellyfish as well).

As an analyst, I frequently want to uncover the relationship between two or more sets of data; relational databases are designed to expedite that discovery, by joining tables of data on a common key. The key is to data tables what mutual friends are to you and the guy you swiped right for. I like relational databases, because getting data out of them is very straightforward.

[Image: an automat in New York. Source: National Geographic]

In this post, I’m going to illustrate this straightforwardness (but with dogs, not Tinder. Wompwomp). We’ll go through what a relational data table looks like, how to create one in your RDB of choice, and how to query its contents and join it to a second table.

Let’s get started.

First, I want you to picture a spreadsheet. (Humor me, budding analyst.) Let’s say it’s a spreadsheet on dog adoptions, called “Adoption.” Adoption has three columns: Breed, Puppy Price, and Average Lifespan.

| Breed | Puppy Price | Average Lifespan (years) |
| --- | --- | --- |
| Labrador Retriever | 950 | 11 |
| St. Bernard | 1,000 | 9 |
| Irish Wolfhound | 2,000 | 8 |
| German Shepherd | 700 | 11 |
| Yorkie | 750 | 14.5 |
| Pug | 800 | 13 |
| French Bulldog | 3,100 | 11 |
| Chihuahua | 500 | 17 |

You also have a second spreadsheet, “Health,” with Breed and Average Annual Cost.

| Breed | Average Annual Cost |
| --- | --- |
| Labrador Retriever | 2,569 |
| St. Bernard | 3,321 |
| Irish Wolfhound | 3,321 |
| German Shepherd | 2,569 |
| Yorkie | 1,831 |
| Pug | 2,268 |
| French Bulldog | 2,268 |
| Chihuahua | 1,831 |

Let’s say you want to calculate the total cost per breed, because you’re a pragmatic person and you want to convince your husband that an Irish Wolfhound is a bad investment, compared to a Chihuahua. In a spreadsheet scenario, these two tables would live in separate tabs, and you might first use a VLOOKUP to join the average annual cost data to the average puppy price and lifespan data.

In a relational database scenario, these tables would live in a schema (an overarching container — akin to a top-level report folder, if you’re the organized sort). You would relate them to one another using a key column, in this case “Breed,” and then multiply the Lifespan column from the Adoption table by the Average Annual Cost column from the Health table.

[Image: the relational model. Source: Neo4j]

A relational database can hold many schemas, each with dozens if not hundreds of tables, and any of them can be linked by their common columns.

In this silly eight-row example, there’s no need to have the data anywhere but a spreadsheet. But, when:

  1. You have millions of rows of data and/or
  2. You have nested data and/or
  3. You want to keep adding to the dataset programmatically and/or
  4. You want others to be able to access and possibly even update the dataset,

a relational database is far more efficient than a spreadsheet.

Time to talk about getting data into a relational database. The relational database I’m using is MySQL, because that’s what our database team set up for me, but the processes I’ll be going through will be similar for Postgres and Oracle. I’ll be connecting to MySQL using a free GUI tool called “MySQL Workbench,” which you can download here. The language I’ll be using to write to and read from it is Structured Query Language, aka “SQL.” I’ll write more about SQL in an upcoming post, but if you’re unfamiliar with it, just think of it as a way to specify the type and aggregation of information you want. 

Data goes into a database in one of two ways. If the data already exists in a file format, you create a table for it in the appropriate schema, and then you upload it to that table. If the data is not yet exposed to you, you pull it down from its source location, format it appropriately, and push it to the table you’ve created for it. Typically, this would be done in a script, and you could have that script run on a schedule, so that your data table(s) fill up on the reg.

In this tutorial, I’m making the assumption that you already have access to a database and a schema within that database, and that you’ve opened up a connection to that database, either through a GUI like MySQL Workbench or through the command line (your terminal app). A schema, by the way, is like a bureau, only it holds tables, instead of shelves. (Ack, mixing furniture metaphors here!)

All set? Cool, let’s build a table.

First, connect to your database using the connection strings provided to you. In MySQL Workbench, you connect in the Home panel, by clicking on the (+) button.

[Image: the MySQL Workbench Home panel]

After you’ve connected to your database, you need to specify the schema you’ll be using. If you already had a “dogs” schema, you’d run:

USE dogs;

As there is no dogs schema yet, you’ll need to create one, and then switch to it:

CREATE SCHEMA IF NOT EXISTS dogs;
USE dogs;

In our money-efficient pets scenario, we want the adoption table and the health table. As the dogs schema is empty, we’ll need to create both. To create a table in SQL, you first give it a name, and then you list each of its columns, along with their type (integer, character, date, etc.), and other options like “NOT NULL” if the column fields can’t be blank, and “PRIMARY KEY” if you want the column to be, or contribute to, the unique identifier that rows are matched on. (More on this in a moment!)

CREATE TABLE adoption (Breed VARCHAR(45), Puppy_Price INT, Avg_Lifespan DECIMAL(4,1));

The above creates your table, adoption, and specifies the names and data types of its columns. VARCHAR holds varied character data (aka strings), INT is for whole numbers, DATE is for dates, DATETIME is for date+time stamps, BOOL is True/False (False=0, True>0), and DECIMAL is for decimals (aka floats). Avg_Lifespan gets DECIMAL rather than INT because the Yorkie’s 14.5 years would otherwise be truncated. Full definitions of these types and more are here.

Using the same structure used to create your adoption table, create your health table.

CREATE TABLE health (Breed VARCHAR(45), Avg_Annual_Cost INT);

Now that you’ve created your table structures, you can fill them with data. This might be done programmatically, by hitting up an API, or manually, by loading a file. In the pets case, let’s assume the adoption data and health data each live in CSVs, located in your local Pets folder. Load them with the following:

LOAD DATA LOCAL INFILE '~/Pets/adoption.csv' INTO TABLE adoption
  FIELDS TERMINATED BY ',' IGNORE 1 LINES; -- assumes comma-delimited files with a header row to skip
LOAD DATA LOCAL INFILE '~/Pets/health.csv' INTO TABLE health
  FIELDS TERMINATED BY ',' IGNORE 1 LINES;

Check to see the data has loaded properly with a SELECT * (aka select ALL THE THINGS):

SELECT * FROM adoption;
SELECT * FROM health;

You should see your full tables returned.

Now that we have our tables, we can start grilling ’em.

The SELECT statement used above starts pretty much any basic SQL query you might have. All it does is tell your not-yet-specified table, “hey, I’m going to be asking for these not-yet-specified things.” Let’s add some specificity!

The big, burning question you have regarding your pets data is which pets are the most and least cost-efficient. To answer it, you’ll need to multiply the average annual cost for each pet from the health table by its lifespan from the adoption table, and add the adoption fees from the adoption table to this total. What this means, in SQL terms, is that the time has come for JOINS and SUMs.

To get your first answer — all the pertinent data side by side — you’ll need to JOIN the average annual cost data from the health table to the puppy price and lifespan data from the adoption table, on the shared Breed column.

SELECT health.Breed, adoption.Puppy_Price, health.Avg_Annual_Cost, adoption.Avg_Lifespan
FROM health
INNER JOIN adoption
ON health.Breed = adoption.Breed;

| Breed | Puppy Price | Average Annual Cost | Average Lifespan (years) |
| --- | --- | --- | --- |
| Labrador Retriever | $950 | $2,569 | 11 |
| St. Bernard | $1,000 | $3,321 | 9 |
| Irish Wolfhound | $2,000 | $3,321 | 8 |
| German Shepherd | $700 | $2,569 | 11 |
| Yorkie | $750 | $1,831 | 14.5 |
| Pug | $800 | $2,268 | 13 |
| French Bulldog | $3,100 | $2,268 | 11 |
| Chihuahua | $500 | $1,831 | 17 |

Now you have all your pertinent data in one result set, but what you really want is to calculate the total health costs (average annual cost * lifespan) and add that to your puppy price. You can do this in your original JOIN statement.

SELECT health.Breed, ((health.Avg_Annual_Cost * adoption.Avg_Lifespan) + adoption.Puppy_Price) AS `Total Cost`
FROM health
INNER JOIN adoption
ON health.Breed = adoption.Breed
ORDER BY 2 DESC;

And the results, which we’ve ordered by total cost, from most to least expensive:

| Breed | Total Cost |
| --- | --- |
| Chihuahua | $31,627.00 |
| St. Bernard | $30,889.00 |
| Pug | $30,284.00 |
| Labrador Retriever | $29,209.00 |
| German Shepherd | $28,959.00 |
| Irish Wolfhound | $28,568.00 |
| French Bulldog | $28,048.00 |
| Yorkie | $27,299.50 |

Looks like the Chihuahua, by dint of its longevity, ends up costing the most, while the French Bulldog, despite the steepest sticker price, lands near the bottom. Go drop $3k on a Frenchie!

To recap, in this post, you learned about the reasons for using relational databases instead of spreadsheets, how to set up a schema and tables in a relational database, how to fill the tables with data from a file, and how to perform aggregations and joins on this data.

In the next post, I’ll dive further into SQL, and walk through inserting data into an RDB by other means (chiefly, through an API).

So You Want to Be a Data Analyst, Chapter 2: The API (Is Just an ATM in The Sky)

Before you can extract any insights from data, you need to have the data in the first place. If you’re lucky, you have the data sitting pretty in a database, or you can easily export it from its source. If you don’t, however, you can hit up an API to get it. You may also want to use an API even if the export function is available, for efficiency’s sake — if, for example, you’re pulling data from multiple Facebook pages, or you want three months’ worth of data and the export only allows for one month.

So, the answer to the question of why you’d use an API is: it gives you access to data that is otherwise unavailable or inefficiently available.

Wait, What Is an API?

Technically, it’s an Application Program Interface, but I think of it as a soul sister of the ATM. The typical ATM allows you to deposit funds, withdraw funds, and get information about your accounts, provided you have a debit card and know your PIN. An API allows you to post data (deposit) and fetch data (balance inquiry) at endpoints (bank accounts), provided you have an API key (debit card and PIN).

As a data analyst, my interactions with APIs are mainly on the fetching side. In this post, I’ll be walking you through fetching the authors of the most-shared articles on the New York Times, using the NYT’s Most Popular API.

Fetching Data from an API, Step 1: Obtain an API Key

An API key is typically a long string of numbers and letters that is issued to you by the application’s creators. It gives you varying degrees of access to the application’s data, with fetching (or making a GET request, in API parlance) being the lowest/most easily obtained level of access. The process of API key issuance differs from one API to another; to access the New York Times’ Most Popular API, you just need to register with their developer network, name your quote-unquote application (in this case, just a script that fetches data), and select the correct API checkbox, and presto, the NYT will issue you a key.

[Screenshot: registering an application on the NYT developer network]

Fetching Data from an API, Step 2: Create a Get Request

Once you have your API key, you can use it to request data from the API, much as you might insert your debit card into an ATM and request balance info. API requests are made to the API’s URI, and typically contain the endpoint (specific data type), API key, and parameters like date that allow you to drill into a subset of the endpoint’s data. Requests to the NYT’s Most Popular API take the following format: http://api.nytimes.com/svc/mostpopular/v2/[mostshared|mostemailed|mostviewed]/[all-sections|a specific section]/[days back: 1|7|30].json?api-key=[your key].

In your case, you want the most shared endpoint, and you want a month of data, so your request will look like this: http://api.nytimes.com/svc/mostpopular/v2/mostshared/all-sections/30.json?api-key=[your key].

Put that into your URL bar and see what gets returned. You should see a bunch of json objects, each containing metadata associated with one of the most shared articles.

Fetching Data From an API, Step 3: Get and Store the Response Data

Sure, you could copy all that json you see below the URL bar into a text document and manually locate the information you want (the authors), but that would be boring, time-consuming, and error-prone. Instead, you’ll make the request programmatically and store the response data in an object you’ll deconstruct in step 4.

You can use any programming language you like to interact with APIs, but this example uses python and its requests library, mainly because python is really handy for analysis. Prereqs for this step are:

  1. a text editor (I like Sublime Text)
  2. python (ships with the default build on Macs)
  3. admin access to your computer
  4. pip (or your python package installer of choice)
  5. the requests library (which you’ll use to hit the Most Popular API’s most shared endpoint)

Provided you have admin access, you can get no. 4 by following these instructions. Once you have no. 4, you can get no. 5 by opening Terminal (Spotlight → Terminal) and typing “pip install requests.”

Prereqs all taken care of? Open a new file in Sublime Text and save it as “nyt_mostpop.py”. This will be the python script that fetches and formats the most shared articles.

First, import the needed modules:

import requests
import json
import csv

Next, create the request and response objects:

pop_request = 'http://api.nytimes.com/svc/mostpopular/v2/mostshared/all-sections/30.json?api-key=[your key]'
pop = requests.get(pop_request)

“pop” holds the entire response, but if you print its variables (print vars(pop)), you’ll see that you’re just interested in the “results” part. Store that in its own variable and convert it to a dictionary:

pop_content = pop._content             # the raw response body (pop.text is the public equivalent)
pop_content = json.loads(pop_content)  # parse the json string into a python dictionary

Fetching Data from an API, Step 4: Extract the Necessary Data and Save It to a File

Now that you have your most shared article data stored in a big ol’ object, you’re going to parse it and extract what you need. First, in the python interpreter, you can take a closer look at pop_content to find the names of the fields you’re interested in:

import requests
import json
import csv
pop_request = 'http://api.nytimes.com/svc/mostpopular/v2/mostshared/all-sections/30.json?api-key=[your key]'
pop = requests.get(pop_request)
pop_content = pop._content
pop_content = json.loads(pop_content)

# print the name of every field available on each article
for key, value in pop_content.items():
    if key == "results":
        for article in value:
            for field_name, field_value in article.items():
                print field_name

As the goal of this exercise is to find which authors contribute the most most-shared articles, you’re going to want the “byline” field for sure, along with the “published_date” field and the “total_shares” field — though the latter is only useful for inference, as it doesn’t actually give share counts, only where the article ranks in the top 20.

Create an empty list for each of the three useful keys. To fill them, you’ll run through the results dictionary within the pop_content object, and save each value that corresponds to a byline, published_date, and total_shares field to a variable, and append that variable to its corresponding list.

dater = []
shares = []
authors = []
for k, v in pop_content.items():
    if k == "results":
        for a in v:
            pubdate = a["published_date"]
            dater.append(pubdate)
            author = a["byline"]
            authors.append(author)
            sharecount = a["total_shares"]
            shares.append(sharecount)

In the python interpreter, you can inspect the contents of each list if you like, by typing “print [list name]”.

The final part of this process is exporting your cleaned and extracted data to a csv file for easypeasy analysis (later, I’ll go through how to analyze this data within the same python script as well as how to push it to a database and analyze it with SQL).

First, create the empty csv file and column headers:

csv_out = open('nyt_mostpop_week.csv', 'wb')
mywriter = csv.writer(csv_out)
mywriter.writerow(['byline', 'published_date', 'total_shares'])  # column headers

Then stitch the lists together, write each row to your csv object, and close the csv file.

for row in zip(authors, dater, shares):
    mywriter.writerow(row)
csv_out.close()

Now, open Terminal, chug into the folder you saved your nyt_mostpop.py script to, and run it (“python nyt_mostpop.py”). In Finder, go to your nyt_mostpop.py folder and presto, you’ll see your new csv file.

As the file only has 20 rows, determining the author with the most appearances can be accomplished with a simple sort (or the “FREQUENCY” function). In my case, pulling data between December 19th, 2015 and January 18th, 2016, I get only one author appearing twice: Mr. Noam Scheiber. Congratulations, Noam!

The complete script for accessing NYT’s most shared articles can be viewed and downloaded here.

In chapter 3 of this series, I’ll address more sophisticated methods of data storage.

So You Want to Be a Data Analyst, Chapter 1: What You Need to Know If You Want to Work with Data and Are Not a Computer Scientist, Statistician, or Mathematician

A few years ago, I began the process of changing my career from marketing to data science. The company I was working for formed a sort of internal innovation lab to work on big data products; when I joined it, I stopped marketing enterprise reporting solutions for .NET developers and started marketing an email application that used machine learning to intelligently prioritize inboxes. What this meant, in practice, was that I got to abandon the arcane and chubby depths of the Microsoft BI stack for the open waters of text mining and AI and personalized recommendations, all of it built on reams and reams of data being chunked up and digested in parallel by hundreds of servers. I didn’t understand any of it any better than I understood the Microsoft stuff, but I wanted to.

[Image: Funes the Memorious]

The path from nascent, toodling interest in big data to employment as a data analyst at a large media company was not exactly direct, and the knowledge that I acquired along the way is not exactly comprehensive (if I’m being generous, I have maybe 1/5th the skillset of a data scientist). It worked, though, because while there are quite a lot of companies today that are generating or are capable of generating massive amounts of data, there are comparatively few people capable of making sense of that data.

For the rest of this series, I’ll be laboring under the assumption that you want to become one of those comparative few. In each chapter, I’ll explain one of the four aspects of quote-unquote “big data” analysis that I’ve found necessary to being an effective analyst. By the end, you should be able to, at a basic level, collect, organize, analyze, and visualize data. But first, I want you to know why each of these functions matters.

[Image: Jackson Pollock, Number 5]

Let’s go back to the penultimate sentence of the last paragraph: you, dear budding data analyst, should be able to collect data, organize data, analyze data, and visualize data. I fumbled along for quite some time before I realized that. For a while, I thought that in order to get an actual data job, I would need to be capable of building predictive and recommendation systems. And it’s true that a fair number of people do spend their time building one or both of these things (typically, these are the data scientists). But here’s another (not sexy enough to get written about) truth: for most companies, predictions and recommendations are the cherry on a sundae they haven’t yet bought. That is to say, in order to get to the point where predictions or recommendations are useful, let alone critical, a company first needs to a) have a lot of users, b) generate a lot of user data, c) store that data in a way that it can be accessed and analyzed, d) employ people who can clean and analyze and report on that data. Plenty of companies don’t even have a, and if they have a, they don’t have b, and they certainly don’t have c or d. And that is where you, with your new abilities to (say it with me) collect, organize, analyze, and visualize data, come in.

What You Need to Know Before You Begin and What You’ll Come to Know as You Learn

[Image: chaos theory wallpaper]

Before you start dealing with big data sets, you should be comfortable getting basic summaries from small ones. E.g., given a table of newsletter opens and clicks for the past year, you should be able to get the monthly average, median, and standard deviation of opens, clicks, and click rates (the latter two statistics illustrate how regular your data is, or isn’t). In general, you should know what a table of data looks like. An understanding of Excel functions like SUMIF/AVERAGEIF, FREQUENCY, and the LOOKUP family will be helpful as well, as you’ll essentially be replicating them in SQL and/or R.

True (predictive) data science requires a solid grasp of calculus and statistics, as well as basic computer programming. Data analysis, however, requires only that you be willing to learn a bit of calculus (slope of a curve), a bit of statistics (summary and significance of your results), and a bit of computer programming (interacting with APIs and cleaning the data those APIs return) — emphasis on the “willing to learn” bit. The following is a sample skillset you’ll amass as you become a data analyst.

  1. Collection
    1. writing programmatic requests to specific API endpoints
      1. (and knowing what all of the above means!)
  2. Organization
    1. parsing the data object returned from the API into a structure that you can push out into a csv
    2. or shoving it into a table in a database
    3. setting up said table in said database
  3. Analysis
    1. using Excel, R, or SQL to obtain aggregate, trended, and/or summary statistics on a filtered dataset
  4. Visualization
    1. using Python, R, Tableau, or Google Charts to graph or otherwise visually represent the output of your query

Sound good? Continue on to Chapter 2: An API Is Just an ATM in The Sky


Folklore, Latin, and Aramaic(?): Language in Harry Potter

This past week, I spent some serious QT with an old friend: Harry Potter. The week had started out on the rougher side, with a pitbull attack that left my dog’s neck and my right calf somewhat worse for the wear. I wanted something comforting to read, the first night, and for some reason Harry Potter came to mind. All seven titles were available on my Oyster; I went for the sixth, raced through it to the seventh, and emerged much comforted, with a penchant for telling people I’d gotten a bad bite off a flobberworm.

Among the many upsides to growing up in the late ’90s and early 2000s, the advent of Harry Potter, for my siblings and me, reigned supreme. I received Harry Potter and the Sorcerer’s Stone as a birthday present in fourth grade, shortly after its American publication date, and let it molder on my bookshelf until I heard some of my bus mates raving about it a few months later. From the first page, I was riveted — by the wit and cheer and fullness of Rowling’s magical world, by the quests, angst-filled and unasked-for, that Harry was obliged to undertake, and, of course, by Harry himself. Rowling gave her boy who lived a Dickensian upbringing to counter the comforts of Hogwarts, a bevy of talents (Quidditch, Defense Against the Dark Arts, bravery) and not insignificant weaknesses (Occlumency, potions, a bullheaded attraction to dangerous situations), great friends, powerful allies, and enemies who were truly, and inventively, evil, or else sinister and shifty and, in the end, not evil at all. There is horror and tragedy aplenty in Harry Potter, but unlike the dystopian teen dramas so popular today, there is also warmth and delight and hope — and language.

Harry Potter takes place in England, and the traits of the magical world are British to the hilt: cozy and eccentric and occasionally feudal. English is the predominant language for Rowling’s wizards, but it is an English enhanced with scads of Latin and plenty of neologisms, ancient mythology, history, and folklore. The Latin is the basis of nearly all spells, whether it is:

unaltered (“accio,” the summoning charm; “patronus,” the dementor-blocking guardian; “crucio,” the unforgivable torture curse),

modified (the unforgivable controlling curse “imperio,” from impero and the blocking spell “impedimenta,” from impedio),

compounded (the beaver-teeth spell “densaugeo,” from dens, “tooth,” and augeo, “to enlarge”),

or

blended with English, as in the playful levitating charm “wingardium leviosa,” from wing + arduus, “steep” + levo, “lift.”

Among the latinate spells lies one ghastly exception: the third unforgivable curse “avada kedavra,” which kills instantly, when uttered by those who truly mean it. According to Rowling, “avada kedavra” is an Aramaic phrase meaning “may the thing be destroyed,” and is the origin of the more common “abracadabra”. All fittingly spooky, but I couldn’t find any concrete evidence of either (the OED says we get abracadabra from the late Greek abraxas, “supreme god”) and ??, an Aramaic scholar, disputes both out of hand. Certainly, “kedavra” bears a phonetic and possibly semantic resemblance to the Latin “cadaver,” but the best estimates as to the latter’s origin cite the verb cadere, “to fall, sink down, perish.”

Moving over to herbology, many of the magical plants Harry encounters in Professor Sprout’s greenhouses have Dahlian names — that is to say whimsical, but at least partly tied to a real-world equivalent, as in the snargaluff pod, the Abyssinian shrivelfig, and the venomous tentacula. Animals, meanwhile, are mostly lifted wholesale from Greek mythology (hippogriffs, centaurs) and British and Slavonic folklore (e.g. boggarts, banshees, Dobby, padfoot from the former and the haunting and havoc-wreaking veela from the latter).

Another word apparently lifted from old Britain is parselmouth, which in the Harry Potter series denotes a person who can converse with snakes. Rowling says the word once referred to people with “mouth problems, like a harelip.” As with “avada kedavra,” I wasn’t able to find proof of this, but it may well be true: the “par” in parcel means “to divide,” and the parsnip was so named for its forked root.

I won’t get into proper nouns, as nearly all are real, though exceedingly well thought out. However, there is a very interesting story around Diagon Alley, the wizarding world’s equivalent of Portobello Market. Rowling got to the name through tmesis, chopping “diagonally” into “diagon” and “alley” and she did so very intentionally: back in early Britain, so-called “ley lines” connected places of power, denoted by markers and mounds. Later, the Romans built roads on the lines, and the Church built churches on the markers and the nobility built castles on the mounds, all in the name of preserving the sanctity, and thus the power, of the originals.


You Can’t Keep a Good Woman Down, or ALS vs the Indomitable Esther Lindwall

Merriam-Webster defines indomitable as “impossible to defeat or discourage.” The word entered English in the 1630s, when England itself was gearing up for civil war. It’s a negation of the Latin domitare, from domare, “to tame.” The “dom” root gives us lots of relations to tame, from domestic to domicile, but indomitable appears to be its sole negation.

This past Friday, I took the train from Penn Station to Albany, where my sister picked me up and skedaddled me from Rensselaer’s spaghetti streets to my father’s farm, in southwest Vermont. It was my third trip to the farm since Labor Day, but the first two had been designed purely around hiking and eating and drinking local beers, whereas this one was dedicated to mourning the passing and celebrating the life of my grandmother Esther, known to me as Mormor, who had passed away a few weeks prior.

In 1995, Esther was diagnosed with Amyotrophic Lateral Sclerosis, or ALS, which you may know as the reason so many people were pouring ice water over themselves last summer. Unlike Lucious Lyon’s, my grandmother’s diagnosis was spot-on. When her daughter married my father, in 1999, Esther was still independently mobile, though she could no longer dance, an activity she loved second only to gabbing. Over the next sixteen years, all of the activities she loved (walking, pouring a glass of wine, stirring a pot of Swedish meatballs, icing a hot milk cake) would be taken from her in fits and spurts.

ALS afflicts the body mercilessly, shutting down the motor neurons that normally serve as a conduit between brain and muscle, which causes massive atrophy and musculoskeletal pain. Mormor — fairly petite to begin with — was smaller each time I saw her; her skin hung like sheets off her arms, and her head appeared outsized atop an increasingly bobbly neck. In the beginning, she could still speak, though the words came out slowwwwly, and somewhat tinnily, as though through a transistor radio. Eventually, slow became a crawl became a valiant fight to get a single word out. To me, those words might be “Brady,” my husband, or “Brooklyn,” where I now live and where she worked as a nurse during the Second World War. For all the unsaid words, she used her eyes and the occasional extension of her hand.

ALS afflicts the body but it spares the mind, and Mormor used hers to escape her increasingly cramped jail. She had a velcro memory and a deeply Swedish sense of humor (dopey-dark, with a subtle punch line). Her Lena and Ollie jokes were often sidelined by peals of laughter that her disease prolonged, and she saw, even, the Swedish humor in that — a disease that lets you start laughing but won’t let you stop.

There is no cure for ALS; its husking out of the body is, typically, fatal. Yet Mormor lived to be ninety-four, and the last twenty years of her life, the disease years, were spent, to the best of her ability, as she’d spent the first seventy-four: making and maintaining friendships, keeping apprised of family goings-on, learning (she became, in her last years, a wicked cribbage player). At her funeral, the line of people who wanted to speak was a microcosm of the people she’d known through thick and thin: children and grandchildren, old friends and new.

ALS is cruel and vicious, but it couldn’t defeat the essence of my grandmother. Shortly before she died, one of the staff members at her hospice celebrated a birthday, and Mormor sang to her — four sentences of song, the words clear, from a woman who hadn’t sung in years.


The History of Accidence in America (or the name of a thing that may be seene, felt, heard, or understood)

My mother was, and remains, a Latin teacher, and Latin was, and remains, a common point of interest between her and my father, who were both classics majors and met at the classics booth on my mother’s first day of shopping classes (my father was the booth boy, and my mother took the bait, at least for 16 years). Defiantly, I took French, and later, Spanish, and later still, Arabic and Italian, and never managed to absorb more than a few phrases of Latin, the main one being puer molestus, or “annoying boy,” which my mother, a teacher of teenagers, had many occasions to utter.

Latin has been a peripheral item in my life, but it was tremendously important in the lives of early non-native Americans. In the Massachusetts Bay colony, English was the lingua franca, but Latin was the lingua erudita, the language of official documents and of the men of God (and lucre) who wrote and received them. Thus, education, when it arrived in Massachusetts, was theological, and heavy on Latin. It was also, at the younger levels, open to all (men), regardless of class, and supported by taxes levied on the local public — and in these traits, it was revolutionary.

[Image: a drawing of the Boston Latin School]

The first public school to open in Massachusetts was the Latin Grammar School, which is now known as the Boston Latin School, and has, through its 380 years, churned out its fair share of luminaries musical (Leonard Bernstein), philosophical (George Santayana), and political (Joe Kennedy). From its early days, Boston Latin was blessed with formidable faculty, starting with one Ezekiel Cheever, who served as the school’s headmaster for nearly forty years. (Among his students was the fearsome—and fearmongering—pastor Cotton Mather, who sent many “witches” to the gallows before backing off, picking up an interest in vaccinations, and, eventually, writing his old Latin teacher a glowing eulogy.)

[Image: Ezekiel Cheever]

Cheever, English-born and Cambridge-educated, was one of the first, if not the first, professional educators in the colonies. Before he came to Boston, he taught Latin in a string of New England towns: New Haven, Ipswich, and Charlestown. Along the way, he wrote many notes on Latin instruction, and after he retired, these notes were economically compiled into a little textbook whose proper title was Cheever’s Latin Accidence: An Elementary Grammar, for Beginners in the Study of the Latin Language (to the cool kids, it was just Accidence).

Accidence was published in 1708, and became the standard Latin textbook for a century and a half. As someone who doesn’t read Latin, I’m not the best person to judge its efficacy, but the English excerpts I’ve read are clear and concise. Cheever’s definitions of nouns and verbs could well be used in English class today.

“A Noune is the name of a thing that may be seen, felt, heard or understand…A Verbe is a parte of speche, declined with mode and tense, and betokenenth dooying: as Amo, I love: or suffering: as Amor, I am loved: or beeying: as Sum, I am.”

To me, the most curious thing about Accidence is its title. “Accidence” is not a word I often run across, and it has two meanings that are quite different from its common cousin’s: the philosophical “non-essential or incidental characteristic,” and the grammatical “part of grammar dealing with inflections.” The latter seems to be the likelier source: inflections are the changes in word form for the expression of tense, person, case, etc., and would be a subject of constant coverage by a Latin teacher, particularly one who made heavy use of oral repetition.


Why our “accident” and Cheever’s “accidence,” which share the same root, ad – “to” + cadere – “fall”, have such different definitions is unclear to me, but the latter’s grammatical meaning came about in the 15th century, long after the former, which made its way into French in the 12th century, from the Latin accidentem. At any rate, Cheever, or Mr. Williams, appears to be the sole employer of “accidence” as a textbook title.

Do you remember much from any of your old textbooks? I remember the renderings of different proto-men in my sixth grade social studies textbook, and that’s about it. But Accidence had weight: it was likely the only textbook in many primary school classrooms for decades, until the popularity of Latin began to wane and make room for other subjects. In his eulogy for Cheever, Cotton Mather wrote “Master of Sentences, he gave us more/The(n) we in Our Sententiae had before.” Accidence was a dead-tree piece of that, but it gave life to years of early scholarship on par with the motherland’s, and when it came time to break away, that scholarship helped us run.
