This article focuses on the serialization and conversion methods of a Python DataFrame: to_parquet(), to_feather(), to_string(), and Styler.
Let’s get started!
Preparation
Before any data manipulation can occur, three (3) new libraries will require installation.
- The Pandas library enables access to/from a DataFrame.
- The Pyarrow library allows writing/reading access to/from a parquet file.
- The Openpyxl library allows styling/writing/reading to/from an Excel file.
To install these libraries, navigate to an IDE terminal and execute the commands below at the command prompt. For the terminal used in this example, the command prompt is a dollar sign ($). Your terminal prompt may be different.
$ pip install pandas
$ pip install pyarrow
$ pip install openpyxl

Hit the <Enter> key on the keyboard after each command to start the installation process.
If an installation was successful, a confirmation message displays in the terminal.
Feel free to view the PyCharm installation guide for the required libraries.
- How to install Pandas on PyCharm
- How to install Pyarrow on PyCharm
- How to install Openpyxl on PyCharm
Add the following code to the top of each code snippet. This snippet will allow the code in this article to run error-free.

import pandas as pd
import pyarrow
import openpyxl
DataFrame.to_parquet()
The to_parquet()
method writes the DataFrame object to a parquet file.
Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem, regardless of the choice of data processing framework, data model, or programming language.
https://parquet.apache.org/
The syntax for this method is as follows:
DataFrame.to_parquet(path=None, engine='auto', compression='snappy', index=None, partition_cols=None, storage_options=None, **kwargs)
Consider the following description of the parameters of this method:
Parameter | Description |
---|---|
path | This parameter is the string path to write to. If None, the result is returned as bytes. |
engine | This parameter is the Parquet library to use as the engine. The options are 'auto' , 'pyarrow' , or 'fastparquet' . |
compression | The compression to use. The options are 'snappy', 'gzip', 'brotli', or None. |
index | If True, the index(es) of the DataFrame will be written. |
partition_cols | If set, the column name(s) for the dataset partition. |
storage_options | This parameter contains extra options (dictionary format), such as host, port, username, etc. |
**kwargs | Additional parameters for the Parquet library. |
Rivers Clothing would like an Employee Report extracted from their existing employees.csv file. This report will contain the top ten (10) earners and will save to a Parquet file.
df = pd.read_csv('employees.csv', usecols=['EMPLOYEE_ID', 'EMAIL', 'SALARY'])
df.sort_values(by='SALARY', ascending=False, inplace=True)
df = df.head(10)
print(df)
df['EMAIL'] = df['EMAIL'].apply(lambda x: "{}{}".format(x.lower(), '@rivers.com'))
df['SALARY'] = df['SALARY'].apply(lambda x: "${:,.2f}".format(x))
df.to_parquet('top_ten.gzip', compression='gzip')
result = pd.read_parquet('top_ten.gzip')
print(result)
- Line [1] reads in three (3) columns and all rows from the CSV file. The output saves to the DataFrame df.
- Line [2] sorts the DataFrame based on SALARY (highest to lowest). The sort results apply to the original DataFrame.
- Line [3] trims the DataFrame down to the top ten (10) rows.
- Line [4] outputs the DataFrame to the terminal.
- Line [5] formats the EMAIL column to lowercase and appends '@rivers.com' to each EMAIL address.
- Line [6] formats the SALARY column to a currency format.
- Line [7] converts the DataFrame to a Parquet file, compresses it with gzip, and saves it to top_ten.gzip.
- Line [8] reads the newly created top_ten.gzip file into the result variable.
- Line [9] outputs the result to the terminal.
Output – df (without formatting)

| | EMPLOYEE_ID | EMAIL | SALARY |
|---|---|---|---|
| 9 | 100 | SILVER | 24000 |
| 11 | 102 | LINDSAY | 17000 |
| 10 | 101 | NICHOLS | 17000 |
| 3 | 201 | MARSH | 13000 |
| 17 | 108 | GREEN | 12008 |
| 7 | 205 | HIGGINS | 12008 |
| 23 | 114 | ROGERS | 11000 |
| 6 | 204 | JOHNSON | 10000 |
| 18 | 109 | FOREST | 9000 |
| 12 | 103 | ARNOLD | 9000 |
Output – top_ten.gzip (formatted)

| | EMPLOYEE_ID | EMAIL | SALARY |
|---|---|---|---|
| 9 | 100 | silver@rivers.com | $24,000.00 |
| 11 | 102 | hlindsay@rivers.com | $17,000.00 |
| 10 | 101 | mnichols@rivers.com | $17,000.00 |
| 3 | 201 | dmarsh@rivers.com | $13,000.00 |
| 17 | 108 | cgreen@rivers.com | $12,008.00 |
| 7 | 205 | bhiggins@rivers.com | $12,008.00 |
| 23 | 114 | drogers@rivers.com | $11,000.00 |
| 6 | 204 | bjohnson@rivers.com | $10,000.00 |
| 18 | 109 | dforest@rivers.com | $9,000.00 |
| 12 | 103 | varnold@rivers.com | $9,000.00 |
DataFrame.to_feather()
The to_feather()
method writes a DataFrame object to a binary Feather format. This format is a lightweight and fast binary way to store a DataFrame. In addition, it takes up less space than an equivalent CSV file.
The syntax for this method is as follows:
DataFrame.to_feather(path, **kwargs)
Here’s a description of the parameters:
Parameter | Description |
---|---|
path | This parameter is the string path (or file-like object) to write to. Unlike to_parquet(), this parameter is required. |
**kwargs | Additional parameters for the pyarrow library. |
This example reads in the first five (5) rows from a semicolon (;) delimited CSV file (cars.csv).
df = pd.read_csv('cars.csv', sep=';', usecols=['Name', 'MPG', 'Model']).head()
df.to_feather('cars.feather')
df = pd.read_feather('cars.feather')
print(df)
- Line [1] reads in the first five (5) rows and three (3) columns from the CSV file. The output saves to df.
- Line [2] converts the DataFrame to a Feather file (cars.feather).
- Line [3] reads the Feather file (cars.feather) into a DataFrame.
- Line [4] outputs the DataFrame to the terminal.
Output – cars.feather

| | Name | MPG | Model |
|---|---|---|---|
| 0 | Chevrolet Chevelle Malibu | 18.0 | 70 |
| 1 | Buick Skylark 320 | 15.0 | 70 |
| 2 | Plymouth Satellite | 18.0 | 70 |
| 3 | AMC Rebel SST | 16.0 | 70 |
| 4 | Ford Torino | 17.0 | 70 |
DataFrame.to_string()
The to_string() method renders a DataFrame object as a console-friendly tabular string.
The syntax for this method is as follows:
DataFrame.to_string(buf=None, columns=None, col_space=None, header=True, index=True, na_rep='NaN', formatters=None, float_format=None, sparsify=None, index_names=True, justify=None, max_rows=None, max_cols=None, show_dimensions=False, decimal='.', line_width=None, min_rows=None, max_colwidth=None, encoding=None)
The respective parameters:
Parameter | Description |
---|---|
buf | This parameter is the file path/buffer to write to. If empty, a string returns. |
columns | This parameter is the sub-set of columns to write. If empty, all columns write. |
col_space | This depicts the length of each column. |
header | This parameter writes out the column names. |
index | This parameter writes out the row (index) names. |
na_rep | This parameter represents the string value for missing data. |
formatters | This parameter is a formatter function to apply to elements by position/name. |
float_format | This parameter is a formatter for floating-point numbers. |
sparsify | If True and MultiIndex, display the key for each row. |
index_names | This parameter displays the index names. |
justify | This parameter determines the column alignment. |
max_rows | This determines the maximum number of rows to display. |
max_cols | This determines the maximum number of columns to display. |
show_dimensions | This parameter displays the dimensions of the DataFrame (total rows/columns). |
decimal | This parameter is the decimal separator, e.g., a comma (,) in Europe. |
line_width | This determines the width to wrap a line in characters. |
min_rows | The number of rows to display if total rows > max_rows. |
max_colwidth | This determines the maximum width at which to truncate column characters. |
encoding | A string representation of encoding. The default value is UTF-8. |
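Several of these parameters can be combined in a single call. The sketch below uses na_rep, float_format, and index on a small illustrative DataFrame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Country': ['Germany', 'France'],
                   'Area': [357021.0, np.nan]})

text = df.to_string(
    na_rep='--',                    # render missing values as --
    float_format='{:,.0f}'.format,  # thousands separator, no decimals
    index=False,                    # suppress the row index
)
print(text)
```

Note that float_format accepts a callable, so any str.format pattern (or custom function) can be plugged in.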
This example reads in the countries.csv file to a DataFrame. This DataFrame is then converted to a string.
💡 Note: Click here to save this CSV file. Then move it to the current working directory.
df = pd.read_csv('countries.csv').head(4)
result = df.to_string()
print(result)
- Line [1] reads in four (4) rows from the countries.csv file. The output saves to a DataFrame df.
- Line [2] converts the DataFrame to a string. The output saves to result.
- Line [3] outputs the result to the terminal.
Output

| | Country | Capital | Population | Area |
|---|---|---|---|---|
| 0 | Germany | Berlin | 83783942 | 357021 |
| 1 | France | Paris | 67081000 | 551695 |
| 2 | Spain | Madrid | 47431256 | 498511 |
| 3 | Italy | Rome | 60317116 | 301338 |
DataFrame Styler
The DataFrame style property returns a Styler object. This object contains methods for styling output formats, such as Excel, CSV, or HTML files.
For this example, the first 15 records of the finxters.csv file are read into a DataFrame. Styles are then applied to this DataFrame, and the result is saved to an Excel file.
💡 Note: Click here to save this CSV file. Then move it to the current working directory.
df = pd.read_csv('finxters.csv', usecols=['FID', 'Username', 'Solved']).head(15)
def color_rule(val):
    return ['background-color: #7FFFD4' if x >= 200 else 'background-color: #FFE4C4' for x in val]
solved = df.style.apply(color_rule, axis=1, subset=['Solved'])
solved.to_excel('users-styled.xlsx', engine='openpyxl')
- Line [1] reads in three (3) columns from the top 15 rows of the finxters.csv file. The output saves to a DataFrame df.
- Line [2-3] defines a function that checks whether each value in the Solved column is at least 200 (>= 200) and styles the cell accordingly.
- Line [4] applies the style to the Solved column.
- Line [5] saves the output to users-styled.xlsx using the openpyxl engine.
Output – users-styled.xlsx file
💡 Note: Click here for an in-depth Finxter article on Excel and styling.
Further Learning Resources
This is Part 21 of the DataFrame method series.