Preparation
Before any data manipulation can occur, three (3) new libraries will require installation.
- The Pandas library enables access to/from a DataFrame.
- The Pyarrow library enables writing to and reading from a Parquet file.
- The Openpyxl library enables styling, writing, and reading to/from an Excel file.
To install these libraries, navigate to an IDE terminal. At the command prompt ($), execute the code below. For the terminal used in this example, the command prompt is a dollar sign ($). Your terminal prompt may be different.
$ pip install pandas
Hit the <Enter> key on the keyboard to start the installation process.
$ pip install pyarrow
Hit the <Enter> key on the keyboard to start the installation process.
$ pip install openpyxl
Hit the <Enter> key on the keyboard to start the installation process.
If the installations were successful, a message displays in the terminal indicating the same.
Feel free to view the PyCharm installation guide for the required libraries.
Add the following code to the top of each code snippet. This snippet will allow the code in this article to run error-free.
import pandas as pd
import pyarrow
import openpyxl
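To confirm that the installed libraries import correctly, a quick version check can be run. This is a minimal sketch; each of these libraries exposes a __version__ attribute, and the numbers printed will depend on your environment.

import pandas as pd
import pyarrow
import openpyxl

# Print the installed version of each library to confirm the imports resolve.
print(pd.__version__)
print(pyarrow.__version__)
print(openpyxl.__version__)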
DataFrame.to_parquet()
The to_parquet() method writes the DataFrame object to a Parquet file.
Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem, regardless of the choice of data processing framework, data model, or programming language.
https://parquet.apache.org/
The syntax for this method is as follows:
DataFrame.to_parquet(path=None, engine='auto', compression='snappy', index=None, partition_cols=None, storage_options=None, **kwargs)
Consider the following description of the parameters of this method:
Parameter | Description |
---|---|
path | This parameter is the string path to write. If None, the result is returned as bytes. |
engine | This parameter is the Parquet library to use as the engine. The options are 'auto', 'pyarrow', or 'fastparquet'. |
compression | The compression to use. The options are 'snappy', 'gzip', 'brotli', or None. |
index | If True, the index(es) of the DataFrame will be written. |
partition_cols | If set, the column name(s) for the dataset partition. |
storage_options | This parameter contains extra options (dictionary format), such as host, port, username, etc. |
**kwargs | Additional parameters for the Parquet library. |
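Before the full report example below, here is a minimal sketch of these parameters in action. The DataFrame contents and the file name teams.parquet are made up for illustration, and the path=None behavior (returning the Parquet data as bytes) applies to recent pandas versions.

import pandas as pd

# A small, made-up DataFrame to exercise the parameters described above.
df = pd.DataFrame({'Team': ['A', 'B', 'C'], 'Points': [10, 7, 12]})

# Write with the pyarrow engine and gzip compression, keeping the index.
df.to_parquet('teams.parquet', engine='pyarrow', compression='gzip', index=True)

# With path=None, the Parquet data is returned as bytes instead of written to disk.
raw = df.to_parquet(path=None, engine='pyarrow')
print(type(raw))

# Read the file back to verify the round trip.
print(pd.read_parquet('teams.parquet'))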
Rivers Clothing would like an Employee Report extracted from their existing employees.csv file.
This report will contain the top ten (10) earners and will save to a Parquet file.
df = pd.read_csv('employees.csv', usecols=['EMPLOYEE_ID', 'EMAIL', 'SALARY'])
df.sort_values(by='SALARY', ascending=False, inplace=True)
df = df.head(10)
print(df)
df['EMAIL'] = df['EMAIL'].apply(lambda x: "{}{}".format(x.lower(), '@rivers.com'))
df['SALARY'] = df['SALARY'].apply(lambda x: "${:,.2f}".format(x))
df.to_parquet('top_ten.gzip', compression='gzip')
result = pd.read_parquet('top_ten.gzip')
print(result)
- Line [1] reads in three (3) columns and all rows from the CSV file. The output saves to the DataFrame df.
- Line [2] sorts the DataFrame based on the SALARY column (highest to lowest). The sort results apply to the original DataFrame.
- Line [3] trims down the DataFrame to the top ten (10) rows.
- Line [4] outputs the DataFrame to the terminal.
- Line [5] formats the EMAIL column to lowercase and appends '@rivers.com' to each EMAIL address.
- Line [6] formats the SALARY column to a currency format.
- Line [7] converts the DataFrame to a Parquet file, compresses it, and saves it to top_ten.gzip.
- Line [8] reads in the newly created top_ten.gzip file and saves it to the result variable.
- Line [9] outputs the result to the terminal.
Output – df (without formatting)
| EMPLOYEE_ID | EMAIL | SALARY |
---|---|---|---|
9 | 100 | SILVER | 24000 |
11 | 102 | LINDSAY | 17000 |
10 | 101 | NICHOLS | 17000 |
3 | 201 | MARSH | 13000 |
17 | 108 | GREEN | 12008 |
7 | 205 | HIGGINS | 12008 |
23 | 114 | ROGERS | 11000 |
6 | 204 | JOHNSON | 10000 |
18 | 109 | FOREST | 9000 |
12 | 103 | ARNOLD | 9000 |
Output – top_ten.gzip (formatted)
| EMPLOYEE_ID | EMAIL | SALARY |
---|---|---|---|
9 | 100 | silver@rivers.com | $24,000.00 |
11 | 102 | hlindsay@rivers.com | $17,000.00 |
10 | 101 | mnichols@rivers.com | $17,000.00 |
3 | 201 | dmarsh@rivers.com | $13,000.00 |
17 | 108 | cgreen@rivers.com | $12,008.00 |
7 | 205 | bhiggins@rivers.com | $12,008.00 |
23 | 114 | drogers@rivers.com | $11,000.00 |
6 | 204 | bjohnson@rivers.com | $10,000.00 |
18 | 109 | dforest@rivers.com | $9,000.00 |
12 | 103 | varnold@rivers.com | $9,000.00 |
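The report above writes a single compressed file. If the data also contained a grouping column, the partition_cols parameter could split the output into one sub-directory per value. The sketch below assumes a hypothetical DEPARTMENT_ID column, which is not part of the employees.csv description in this article.

import pandas as pd

# DEPARTMENT_ID is hypothetical here; this article only describes employees.csv
# as containing EMPLOYEE_ID, EMAIL, and SALARY columns.
df = pd.read_csv('employees.csv', usecols=['EMPLOYEE_ID', 'DEPARTMENT_ID', 'SALARY'])

# partition_cols writes a directory tree with one folder per DEPARTMENT_ID value.
df.to_parquet('employees_by_dept', partition_cols=['DEPARTMENT_ID'], compression='snappy')

# read_parquet() reassembles the partitions; the columns parameter limits what is loaded.
result = pd.read_parquet('employees_by_dept', columns=['EMPLOYEE_ID', 'SALARY'])
print(result.head())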
More Pandas DataFrame Methods
Feel free to learn more about the previous and next pandas DataFrame methods (alphabetically).
Also, check out the full cheat sheet overview of all Pandas DataFrame methods.