Pandas DataFrame to_parquet() Method


Preparation

Before any data manipulation can occur, three (3) new libraries will require installation.

  • The Pandas library enables access to/from a DataFrame.
  • The Pyarrow library allows writing/reading access to/from a parquet file.
  • The Openpyxl library allows styling/writing/reading to/from an Excel file.

To install these libraries, navigate to an IDE terminal and execute the commands below at the command prompt. For the terminal used in this example, the command prompt is a dollar sign ($). Your terminal prompt may be different.

$ pip install pandas

Hit the <Enter> key on the keyboard to start the installation process.

$ pip install pyarrow

Hit the <Enter> key on the keyboard to start the installation process.

$ pip install openpyxl

Hit the <Enter> key on the keyboard to start the installation process.

If an installation succeeds, a message displays in the terminal confirming it.
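
If you want to confirm which versions were installed, pip show accepts multiple package names and prints the details for each:

$ pip show pandas pyarrow openpyxl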


Feel free to view the PyCharm installation guide for the required libraries.


Add the following code to the top of each code snippet. This snippet will allow the code in this article to run error-free.

import pandas as pd
import pyarrow
import openpyxl
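
To confirm that these imports resolve to working installations, a quick check such as the one below prints each library's version to the terminal:

import pandas as pd
import pyarrow
import openpyxl

# Display the installed version of each library.
print(pd.__version__)
print(pyarrow.__version__)
print(openpyxl.__version__)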

DataFrame.to_parquet()

The to_parquet() method writes the DataFrame object to a parquet file.

Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem, regardless of the choice of data processing framework, data model, or programming language.

https://parquet.apache.org/

The syntax for this method is as follows:

DataFrame.to_parquet(path=None, engine='auto', compression='snappy', index=None, partition_cols=None, storage_options=None, **kwargs)

Consider the following description of the parameters of this method:

Parameter        Description
path             The string path or file-like object to write to. If None, the result is returned as bytes.
engine           The Parquet library to use as the engine. The options are 'auto', 'pyarrow', or 'fastparquet'.
compression      The compression to use. The options are 'snappy', 'gzip', 'brotli', or None.
index            If True, the index(es) of the DataFrame will be written to the file.
partition_cols   If set, the column name(s) by which to partition the dataset.
storage_options  Extra options (dictionary format), such as host, port, username, etc.
**kwargs         Additional parameters passed to the Parquet library.
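
To see how these parameters fit together, here is a minimal sketch using a small made-up DataFrame (the file name items.parquet is arbitrary). It writes with an explicit engine, gzip compression, and the index excluded, then reads the file back:

import pandas as pd

# A small made-up DataFrame for demonstration purposes.
items = pd.DataFrame({'Item': ['Hat', 'Scarf', 'Coat'],
                      'Price': [15.99, 9.99, 45.50]})

# Write with an explicit engine, gzip compression, and without the index.
items.to_parquet('items.parquet', engine='pyarrow', compression='gzip', index=False)

# Read the file back to confirm the round trip.
print(pd.read_parquet('items.parquet'))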

Rivers Clothing would like an Employee Report extracted from their existing employees.csv file.

This report will contain the top ten (10) earners and will be saved to a Parquet file.

df = pd.read_csv('employees.csv', usecols=['EMPLOYEE_ID', 'EMAIL', 'SALARY'])
df.sort_values(by='SALARY', ascending=False, inplace=True)
df = df.head(10)
print(df)

df['EMAIL'] = df['EMAIL'].apply(lambda x: "{}{}".format(x.lower(), '@rivers.com'))
df['SALARY'] = df['SALARY'].apply(lambda x: "${:,.2f}".format(x))

df.to_parquet('top_ten.gzip', compression='gzip')
result = pd.read_parquet('top_ten.gzip')  
print(result)
  • Line [1] reads in three (3) columns and all rows from the CSV file. The output saves to the DataFrame df.
  • Line [2] sorts the DataFrame based on the Salary (highest-lowest). The sort results apply to the original DataFrame.
  • Line [3] trims down the DataFrame to the top ten (10) rows.
  • Line [4] outputs the DataFrame to the terminal.
  • Line [5] formats the EMAIL column to lowercase and appends '@rivers.com' to each EMAIL address.
  • Line [6] formats the SALARY column to a currency format.
  • Line [7] converts the DataFrame to a Parquet file, compresses it with gzip, and saves it to top_ten.gzip.
  • Line [8] reads in the newly created top_ten.gzip file and saves it to the result variable.
  • Line [9] outputs the result to the terminal.

Output – df (without formatting)

    EMPLOYEE_ID    EMAIL      SALARY
9   100            SILVER     24000
11  102            LINDSAY    17000
10  101            NICHOLS    17000
3   201            MARSH      13000
17  108            GREEN      12008
7   205            HIGGINS    12008
23  114            ROGERS     11000
6   204            JOHNSON    10000
18  109            FOREST     9000
12  103            ARNOLD     9000

Output – top_ten.gzip (formatted)

    EMPLOYEE_ID    EMAIL                  SALARY
9   100            silver@rivers.com      $24,000.00
11  102            hlindsay@rivers.com    $17,000.00
10  101            mnichols@rivers.com    $17,000.00
3   201            dmarsh@rivers.com      $13,000.00
17  108            cgreen@rivers.com      $12,008.00
7   205            bhiggins@rivers.com    $12,008.00
23  114            drogers@rivers.com     $11,000.00
6   204            bjohnson@rivers.com    $10,000.00
18  109            dforest@rivers.com     $9,000.00
12  103            varnold@rivers.com     $9,000.00
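
The example above only exercises the compression parameter. The sketch below (with a made-up staff DataFrame and arbitrary file names) shows two further options: reading back just a subset of columns, which is where a columnar format pays off, and partitioning the output with partition_cols, which writes a directory of Parquet files rather than a single file.

import pandas as pd

# Read back only two columns from the file created earlier; a columnar
# format only needs to load the requested columns.
subset = pd.read_parquet('top_ten.gzip', columns=['EMAIL', 'SALARY'])
print(subset)

# Partition a small made-up DataFrame by department. This writes a
# directory (staff_by_dept) containing one sub-folder per DEPT value.
staff = pd.DataFrame({'NAME': ['Silver', 'Lindsay', 'Nichols'],
                      'DEPT': ['Sales', 'Sales', 'IT'],
                      'SALARY': [24000, 17000, 17000]})
staff.to_parquet('staff_by_dept', partition_cols=['DEPT'])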

More Pandas DataFrame Methods

Feel free to learn more about the previous and next pandas DataFrame methods (alphabetically) here:

Also, check out the full cheat sheet overview of all Pandas DataFrame methods.