The second in a series of tutorials on using Python for introductory
statistical analysis, this tutorial covers data, including cases, variables,
samples, and a whole lot more. As always, the
IPython Notebook associated with this tutorial is available here on GitHub.
Data used in statistical modeling are usually organized into tables, often created using spreadsheet software. Most people presume that the same software used to create a table of data should be used to display and analyze it. This is part of the reason for the popularity of spreadsheet programs such as ‘Excel’ and ‘Google Spreadsheets’.
For serious statistical work, it’s helpful to take another approach that strictly separates the processes of data collection and of data analysis: use one program to create data files and another program to analyze the data stored in those files. By doing this, one guarantees that the original data are not modified accidentally in the process of analyzing them. This also makes it possible to perform many different analyses of the data; modelers often create and compare many different models of the same data.
Reading Tabular Data into Python¶
Data is central to statistics, and the tabular arrangement of data is very common. Accordingly, Python provides a large number of ways to read in tabular data. These vary depending on how the data are stored, where they are located, etc. To help keep things as simple as possible, the ‘pandas’ Python library provides an operator,
read_csv(), that allows you to access data files in tabular format on your computer as well as data stored in repositories such as the one associated with the ‘Statistical Modeling: A Fresh Approach’ book, or one that a course instructor might set up for his or her students.
The ‘pandas’ library is available here, and you can follow these installation instructions to get it working on your computer (installation via
pip is the easiest method). Once you have ‘pandas’ installed, you need to
import pandas in order to use
read_csv(), as well as a variety of other ‘pandas’ operators that you will encounter later (it is also usually a good idea to
import numpy as np at the same time that we
import pandas as pd).
An alternative to writing `pd.xxx` when calling each ‘pandas’ operator is to import all available operators from ‘pandas’ at once: `from pandas import *`. This makes things a bit easier in terms of typing, but can sometimes lead to confusion when operators from different libraries have the same name.
import pandas as pd
import numpy as np
You need to do this only once in each session of Python, and on systems such as IPython, the library will sometimes be reloaded automatically. (If you get an error message, it’s likely that the ‘pandas’ library has not been installed on your system; follow the installation instructions provided at the link above.)
Reading in a data table with
read_csv() is simply a matter of knowing the name (and location) of the data set. For instance, one data table used in examples in the ‘Statistical Modeling: A Fresh Approach’ book is
"swim100m.csv". To read in this data table and create an object in Python that contains the data, use a command like this:
swim = pd.read_csv("http://www.mosaic-web.org/go/datasets/swim100m.csv")
The csv part of the name in
"swim100m.csv" indicates that the file has been stored in a particular data format, comma-separated values, which is handled by spreadsheet software as well as many other kinds of software. The part of this command that requires creativity is choosing a name for the Python object that will hold the data. In the above command it is called
swim, but you might prefer another name (e.g.,
sdata or even
ralph). Of course, it’s sensible to choose names that are short, easy to type and remember, and remind you what the contents of the object are about.
To help you identify data tables that can be accessed through
read_csv(), examples from these tutorials will be marked with a flag containing the name of the data file. The files themselves are mostly available automatically through the web site for the ‘Statistical Modeling: A Fresh Approach’ book.
The type of Python object created by
read_csv() is called a data frame and is essentially a tabular layout. To illustrate, here are the first several cases of the
swim data frame created by the previous use of read_csv(), displayed with the head() function:
swim.head()
Note that the head() function, one of several functions built into ‘pandas’ data frames, is a function of the Python object (the data frame) itself, not of the main ‘pandas’ library.
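As a quick sketch of how head() behaves, here is a small hand-built data frame standing in for the actual swim file (the values below are made up; only the column names match the data set described in the text):

```python
import pandas as pd

# A tiny stand-in for the swim data frame; values are invented,
# but the column names match those described in the text.
swim = pd.DataFrame({
    "year": [1905, 1908, 1910, 1912, 1918, 1920],
    "time": [65.8, 65.6, 62.4, 61.6, 61.4, 60.4],
    "sex":  ["M", "M", "M", "M", "M", "M"],
})

# head() belongs to the data frame object itself, not to the pandas module
print(swim.head(3))  # the first three cases
```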
Data frames, like tabular data generally, involve variables and cases. In ‘pandas’ data frames, each of the variables is given a name. You can refer to a variable by name in a couple of different ways. To see the variable names in a data frame, something you might want to do to remind yourself of how names are spelled and capitalized, use the
columns attribute of the data frame object:
swim.columns
Index([u'year', u'time', u'sex'], dtype=object)
Note that we have not used brackets
() in the above command. This is because
columns is not a function; it is an attribute of the data frame. Attributes add ‘extra’ information (or metadata) to objects in the form of additional Python objects. In this case, the attribute describes the names (and data types) of the columns. Another way to get quick information about the variables in a data frame is with the describe() function:
swim.describe()
This provides a numerical summary of each of the variables contained in the data frame. To keep things simple, the output from
describe() is itself a data frame.
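Since describe() returns an ordinary data frame, its output can itself be indexed like one. A minimal sketch with made-up values:

```python
import pandas as pd

# Made-up stand-in data; only the pattern matters here
swim = pd.DataFrame({
    "year": [1905, 1908, 1910],
    "time": [65.8, 65.6, 62.4],
})

summary = swim.describe()                 # numerical summary of each variable
print(isinstance(summary, pd.DataFrame))  # True: describe() returns a DataFrame
print(summary.loc["mean", "time"])        # so rows/columns can be picked out of it
```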
There are lots of different functions and attributes available for data frames (and any other Python objects). For instance, to see how many cases and variables there are in a data frame, you can use the shape attribute:
swim.shape
Variables in Data Frames¶
Perhaps the most common operation on a data frame is to refer to the values in a single variable. The two ways you will most commonly use involve referring to a variable by its string-quoted name (swim["year"]) and as an attribute of the data frame without quotes (swim.year).
Each column or variable in a ‘pandas’ data frame is called a ‘series’, and each series can contain one of many different data types. For more information on series, data frames, and other objects in ‘pandas’, [have a look here][intro].
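The two styles of reference pick out the same underlying series, which a quick sketch (with made-up values) confirms:

```python
import pandas as pd

swim = pd.DataFrame({"year": [1905, 1908, 1910],
                     "time": [65.8, 65.6, 62.4]})

by_name = swim["year"]   # string-quoted name
by_attr = swim.year      # attribute-style access

print(by_name.equals(by_attr))  # True: both refer to the same column
print(type(by_name).__name__)   # Series
```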
Most of the statistical modeling functions you will encounter in these tutorials are designed to work with data frames and allow you to refer directly to variables within a data frame. For instance:
swim.year.min()
It is also possible to combine ‘numpy’ operators with ‘pandas’ variables:
np.mean(swim.year)
The swim portion of the above commands tells Python which data frame we want to operate on. Leaving off that argument leads to an error:
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-11-2ef03df1cde8> in <module>()
----> 1 year.min()

NameError: name 'year' is not defined
Of course, you know that the variable year is defined within the data frame
swim, but you have to tell Python explicitly which data frame you want to operate on; otherwise it doesn’t know where to find the variable(s). Think of this notation as referring to the variable by both its family name (the data frame’s name,
"swim") and its given name (
"year"), something like swim.year.
The advantage of referring to variables by name becomes evident when you construct statements that involve more than one variable within a data frame. For instance, here’s a calculation of the mean year, separately for (grouping by) the different sexes:
swim.groupby('sex')['year'].mean()
sex
F    1950.677419
M    1953.612903
Name: year, dtype: float64
You will see much more of the
groupby function, starting in Tutorial 4 (Group-wise Models). It’s the ‘pandas’ way of grouping or aggregating data frames. In subsequent chapters, we will build on this notion to develop more complex ways of “grouping” and “modeling” variables “by” other variables.
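Here is a minimal sketch of the same grouping pattern on a tiny hand-built data frame, so the arithmetic can be checked by eye (the values are made up):

```python
import pandas as pd

swim = pd.DataFrame({
    "year": [1924, 1932, 1956, 1964],
    "sex":  ["F", "M", "F", "M"],
})

# Mean year computed separately within each sex
means = swim.groupby("sex")["year"].mean()
print(means["F"])  # (1924 + 1956) / 2 = 1940.0
print(means["M"])  # (1932 + 1964) / 2 = 1948.0
```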
The mean() and min() functions have been arranged by the ‘pandas’ library to look in the data frame when interpreting variables, but not all Python functions are designed this way. For instance:
swim.year.sqrt()
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-13-e6382fdf6716> in <module>()
----> 1 swim.year.sqrt()

AttributeError: 'Series' object has no attribute 'sqrt'
When you encounter a function that isn’t supported by data frames, you can use ‘numpy’ functions along with the special apply function built into data frames (note that the func argument name is optional):
swim.year.apply(func=np.sqrt).head() # There are 62 cases in total
0    43.646306
1    43.680659
2    43.703547
3    43.726422
4    43.794977
Name: year, dtype: float64
Alternatively, since columns are basically just arrays, we can use built-in numpy functions directly on the columns:
np.sqrt(swim.year).head() # Again, there are 62 cases in total
0    43.646306
1    43.680659
2    43.703547
3    43.726422
4    43.794977
Name: year, dtype: float64
Adding a New Variable¶
Sometimes you will compute a new quantity from the existing variables and want to treat this as a new variable. Adding a new variable to a data frame can be done similarly to accessing a variable. For instance, here is how to create a new variable in
swim that holds the
time converted from seconds to units of minutes:
swim['minutes'] = swim.time/60. # or swim['time']/60.
By default, columns get inserted at the end. The
insert function is available to insert at a particular location in the columns.
swim.insert(1, 'mins', swim.time/60.)
You could also, if you want, redefine an existing variable, for instance:
swim['time'] = swim.time/60.
As always, we can take a quick look at the results of our operations by using the
head() function of our data frame:
Such assignment operations do not change the original file from which the data were read, only the data frame in the current session of Python. This is an advantage, since it means that your data in the data file stay in their original state and therefore won’t be corrupted by operations made during analysis.
Sampling from a Sampling Frame¶
Much of statistical analysis is concerned with the consequences of drawing a sample from the population. Ideally, you will have a sampling frame that lists every member of the population from which the sample is to be drawn. With this in hand, you could treat the individual cases in the sampling frame as if they were cards in a deck. To pick your random sample, shuffle the deck and deal out the desired number of cards.
When doing real work in the field, you would use the randomly dealt cards to locate the real-world cases they correspond to. Sometimes in these tutorials, however, in order to let you explore the consequences of sampling, you will select a sample from an existing data set. For example, consider the
"kidsfeet.csv" data set, whose shape attribute tells us how many cases and variables it has:
kids = pd.read_csv("http://www.mosaic-web.org/go/datasets/kidsfeet.csv")
kids.shape
There are a number of ways to draw a random sample of 5 cases from this data frame. The preferred option, however, is to randomly select a subset of case ids (in this case 5) using
np.random.choice, and return a subsetted data frame using the ix property.
The `ix` property is a bit tricky to figure out at first. For more information, see [the official docs][selecting].
rows = np.random.choice(kids.index, 5, replace=False)
kids.ix[rows]
To make things a bit more concise, you can `from numpy.random import choice`, which will allow you to simply use `choice()` without typing the library *and* module each time.
This can also be done in a single line:
kids.ix[np.random.choice(kids.index, 5, replace=False)]
The results returned by the above methods will never contain the same case more than once (because we told the function not to sample with replacement), just as if you were dealing cards from a shuffled deck. In contrast, ‘re-sampling with replacement’ replaces each case after it is dealt so that it can appear more than once in the result. You wouldn’t want to do this to select from a sampling frame, but it turns out that there are valuable statistical uses for this sort of sampling with replacement. You’ll make use of re-sampling in Tutorial 5 (Confidence Intervals).
np.random.seed(1237)  # Set seed so results are reproducible
kids.ix[np.random.choice(kids.index, 5, replace=True)]
Notice that ‘Glen’ was sampled twice.
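The contrast between the two kinds of sampling can be sketched with a plain index of case numbers, with no real data needed:

```python
import numpy as np

index = np.arange(10)  # stand-in for a data frame's index (cases 0..9)

np.random.seed(0)  # for reproducibility
without_repl = np.random.choice(index, 5, replace=False)
with_repl = np.random.choice(index, 5, replace=True)

# Without replacement, all five dealt cases are guaranteed distinct
print(len(set(without_repl)))  # 5
# With replacement, a case may be dealt more than once,
# so there may be 5 or fewer distinct cases
print(len(set(with_repl)))
```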
As with all ‘Statistical Modeling: A Fresh Approach for Python’ tutorials, this tutorial is based directly on material from ‘Statistical Modeling: A Fresh Approach (2nd Edition)’ by Daniel Kaplan. This tutorial is based on Chapter 2: Data: Cases, Variables, Samples.
I have made an effort to keep the text and explanations consistent between the original (R-based) version and the Python tutorials, in order to keep things comparable. With that in mind, any errors, omissions, and/or differences between the two versions are mine, and any questions, comments, and/or concerns should be directed to me.