
The Three Stages of Data Analysis: Evaluating Raw Data

By Diana Aleman

Categories: Big Data, Data Analysis, MentorSpace, Quantitative, Tools and Resources


Collecting, analyzing, and reporting with data can be daunting. The person SAGE Publishing (the parent of MethodSpace) turns to when it has questions is Diana Aleman, Editor Extraordinaire for SAGE Stats and U.S. Political Stats. Now she is bringing her trials, tribulations, and expertise with data to you in a brand-new monthly blog, Tips with Diana. Stay tuned for Diana's experiences, tips, and tricks for finding, analyzing, and visualizing data. This is the first post in a series on data analysis. The next post, on cleaning your data, appears here. The final post, on summarizing your data, is here.

Starting a large-scale research project? Head to SAGE Research Methods' Project Planner for more guidance!

The basics

A friend I hadn't seen in a while asked me what I do for a living, and I talked about SAGE Stats and the work that goes into maintaining and building the collection. Instead of his eyes glazing over (like most people's would), he asked me, "OK, not to seem like an idiot, but what is data analysis? Like, what does it cover?" If you've had similar thoughts, never fear! I can safely say I've received multiple variations of this question before. My typical answer: what doesn't it cover?

Data analysis covers everything from reading the source methodology behind a data collection to creating a visualization of the statistic you have extracted. The steps in between include deciphering variable descriptions, performing data quality checks, correcting spelling irregularities, reformatting the file layout to fit your needs, deciding which statistic best describes the data, and choosing the formulas and methods to calculate it. Phew. Still with me?

These steps and many others fall into three stages of the data analysis process: evaluate, clean, and summarize.

Let’s take some time with Stage 1: Evaluate. We’ll get into Stages 2 and 3 in upcoming posts. Ready? Here we go…

The breakdown: Evaluate

Evaluating a data file is kind of like an episode of House Hunters: you need to explore a data file for structural or other flaws that would be a deal-breaker for you. How old is this house? Is the construction structurally sound? Is there a blueprint that I can look at?

Similarly, when evaluating a raw data file you have collected, you should consider the following questions and tips:

  • Read through the data dictionary, codebook, or record layout, which should detail what each field represents. Try not to start playing with the data until you know what you're looking at. You wouldn't start renovations in your new house without reading the blueprints, right? You gotta know if that wall is load-bearing!
  • What irregularities does the methodology documentation describe, and how may they have affected the data? Which methodology notes should I make transparent to the reader?
  • Is the raw data complete? That is, are there missing values for any records? (Missing values in the raw data can distort your calculations.)
  • What outliers exist in the data set? Do they make sense in the context of the data? For instance, a house price of $1.8 million in a neighborhood where houses don’t exceed $200K is probably a red flag.
  • Spot-check the raw data. If the data set provides totals, sum the component values and check that they match. If they don't, does the documentation explain why? (A short code sketch of these last three checks follows this list.)
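
If you work with a scripting language, those last three checks take only a few lines. Below is a minimal pandas sketch, assuming a generic CSV file; the file name and column names (house_price, the reported total) are hypothetical stand-ins for whatever your data dictionary actually describes.

```python
import pandas as pd

# Hypothetical file and column names -- substitute whatever your
# data dictionary or codebook describes.
df = pd.read_csv("raw_data.csv")

# Completeness: count missing values in each field.
print(df.isna().sum())

# Outliers: look at the distribution and the most extreme records,
# then ask whether they make sense in context (a $1.8M house in a
# $200K neighborhood is a red flag).
print(df["house_price"].describe())
print(df.nlargest(5, "house_price"))

# Spot check: if the source publishes a total, confirm the component
# values actually add up to it.
reported_total = 1_234_567          # the figure printed in the source
computed_total = df["house_price"].sum()
if computed_total != reported_total:
    print(f"Totals differ: computed {computed_total:,}, reported {reported_total:,}")
```

None of this replaces reading the documentation; it just makes the questions in the list above concrete and repeatable.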

When spot-checking, start with a data point you're already familiar with. For geographic data, for example, checking your home state and other states you know well will let you spot something off much faster than checking a random record.
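
As a small illustration, continuing the hypothetical column names from the sketch above:

```python
# Eyeball a slice you know well; implausible values jump out faster
# in your home state than in an unfamiliar one.
home_state = df[df["state"] == "CA"]
print(home_state[["state", "year", "house_price"]].head(10))
```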

So if the source is good, then the data must be good too. Right?

It's a mistake to assume the data is authoritative or fine as is just because it comes from a published government source or another source you consider just as reliable. Data reporting is susceptible to manipulation and simple mistakes despite the best efforts and intentions of the responsible organizations. Assume nothing and evaluate the data to make sure it checks out! The next stage of data analysis is cleaning the raw data to fit your needs. Stay tuned for my next post, where I will review the most effective Excel tips and tricks I've learned to help you in your own work!

 

The Washington Post has compiled incident-level data on police shootings since 2015 with the help of crowdsourcing. This is an impressive feat, but as I evaluated the raw data they provide, I walked away with several questions:

  • Are missing values due to underreporting by police?
  • What are the original sources for each incident?
  • Do they distinguish between neighborhoods in cities or just use major cities?

Together, these questions helped me decide that the Post's data was not suitable for use in SAGE Stats quite yet.
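
For instance, to start probing the first question, one rough approach is to see whether the missing values cluster geographically or over time. This is only a sketch: it assumes you have downloaded the Post's file as a local CSV, and the column names here are illustrative rather than the Post's actual field names.

```python
import pandas as pd

# Illustrative file and column names -- check the Post's own
# documentation for the real field names.
shootings = pd.read_csv("police_shootings.csv", parse_dates=["date"])

# Share of records with a missing value, broken out by state and year.
missing_by_state = shootings["race"].isna().groupby(shootings["state"]).mean()
print(missing_by_state.sort_values(ascending=False).head(10))

missing_by_year = shootings["race"].isna().groupby(shootings["date"].dt.year).mean()
print(missing_by_year)
```

If the gaps turn out to be concentrated in a handful of states or years, that is a hint that the missing values reflect uneven reporting rather than chance.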
