Serialize DataFrame to Parquet, Feather, String, Styler

This article focuses on the serialization and conversion methods of a pandas DataFrame:

  • to_parquet(),
  • to_feather(),
  • to_string(),
  • Styler.

Let’s get started!


Preparation

Before any data manipulation can occur, three (3) new libraries will require installation.

  • The Pandas library enables access to/from a DataFrame.
  • The Pyarrow library allows writing/reading access to/from a parquet file.
  • The Openpyxl library allows styling/writing/reading to/from an Excel file.

To install these libraries, navigate to an IDE terminal and execute the commands below, pressing the <Enter> key after each one. For the terminal used in this example, the command prompt is a dollar sign ($). Your terminal prompt may be different.

$ pip install pandas
$ pip install pyarrow
$ pip install openpyxl

If the installations were successful, a confirmation message displays in the terminal.


Feel free to view the PyCharm installation guide for the required libraries.


Add the following imports to the top of each code snippet. This will allow the code in this article to run error-free.

import pandas as pd
import pyarrow
import openpyxl

DataFrame.to_parquet()

The to_parquet() method writes the DataFrame object to a parquet file.

Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem, regardless of the choice of data processing framework, data model, or programming language.

https://parquet.apache.org/

The syntax for this method is as follows:

DataFrame.to_parquet(path=None, engine='auto', compression='snappy', index=None, partition_cols=None, storage_options=None, **kwargs)

Consider the following description of the parameters of this method:

Consider the following description of the parameters of this method:

  • path – The string path to write. If None, a bytes object returns.
  • engine – The Parquet library to use as the engine. The options are 'auto', 'pyarrow', or 'fastparquet'.
  • compression – The compression to use. The options are 'snappy', 'gzip', 'brotli', or None.
  • index – If True, the index(es) of the DataFrame will be written.
  • partition_cols – If set, the column name(s) by which to partition the dataset.
  • storage_options – Extra options (dictionary format), such as host, port, username, etc.
  • **kwargs – Additional parameters for the Parquet library.

Rivers Clothing would like an Employee Report extracted from their existing employees.csv file.

This report will contain the top ten (10) earners and will save to a Parquet file.

df = pd.read_csv('employees.csv', usecols=['EMPLOYEE_ID', 'EMAIL', 'SALARY'])
df.sort_values(by='SALARY', ascending=False, inplace=True)
df = df.head(10)
print(df)

df['EMAIL'] = df['EMAIL'].apply(lambda x: "{}{}".format(x.lower(), '@rivers.com'))
df['SALARY'] = df['SALARY'].apply(lambda x: "${:,.2f}".format(x))

df.to_parquet('top_ten.gzip', compression='gzip')
result = pd.read_parquet('top_ten.gzip')  
print(result)
  • Line [1] reads in three (3) columns and all rows from the CSV file. The output saves to the DataFrame df.
  • Line [2] sorts the DataFrame based on the Salary (highest-lowest). The sort results apply to the original DataFrame.
  • Line [3] trims down the DataFrame to the top ten (10) rows.
  • Line [4] outputs the DataFrame to the terminal.
  • Line [5] formats the EMAIL column to lowercase and appends '@rivers.com' to each EMAIL address.
  • Line [6] formats the SALARY column to a currency format.
  • Line [7] converts the DataFrame to a Parquet file, compresses it with gzip, and saves it as top_ten.gzip.
  • Line [8] reads the newly created top_ten.gzip file back into a DataFrame and saves it to the result variable.
  • Line [9] outputs the result to the terminal.

Output – df (without formatting)

    EMPLOYEE_ID     EMAIL  SALARY
9           100    SILVER   24000
11          102  HLINDSAY   17000
10          101  MNICHOLS   17000
3           201    DMARSH   13000
17          108    CGREEN   12008
7           205  BHIGGINS   12008
23          114   DROGERS   11000
6           204  BJOHNSON   10000
18          109   DFOREST    9000
12          103   VARNOLD    9000

Output – top_ten.gzip (formatted)

    EMPLOYEE_ID                EMAIL      SALARY
9           100    silver@rivers.com  $24,000.00
11          102  hlindsay@rivers.com  $17,000.00
10          101  mnichols@rivers.com  $17,000.00
3           201    dmarsh@rivers.com  $13,000.00
17          108    cgreen@rivers.com  $12,008.00
7           205  bhiggins@rivers.com  $12,008.00
23          114   drogers@rivers.com  $11,000.00
6           204  bjohnson@rivers.com  $10,000.00
18          109   dforest@rivers.com   $9,000.00
12          103   varnold@rivers.com   $9,000.00

DataFrame.to_feather()

The to_feather() method writes a DataFrame object to a binary Feather format. This format is a lightweight and fast binary way to store a DataFrame. In addition, it takes up less space than an equivalent CSV file.

The syntax for this method is as follows:

DataFrame.to_feather(path, **kwargs)

Here’s a description of the parameters:

  • path – The string path to write. Unlike to_parquet(), this parameter is required.
  • **kwargs – Additional parameters for the pyarrow library.

This example reads in the first five (5) rows from a semi-colon (;) delimited CSV file (cars.csv).

df = pd.read_csv('cars.csv', sep=';', usecols=['Name', 'MPG', 'Model']).head()
df.to_feather('cars.feather')
df = pd.read_feather('cars.feather')
print(df)
  • Line [1] reads in the first five (5) rows and three (3) columns from the CSV file. The output saves to df.
  • Line [2] converts the DataFrame to a Feather file (cars.feather).
  • Line [3] reads the Feather file (cars.feather) into a DataFrame.
  • Line [4] outputs the DataFrame to the terminal.

Output – cars.feather

                        Name   MPG  Model
0  Chevrolet Chevelle Malibu  18.0     70
1          Buick Skylark 320  15.0     70
2         Plymouth Satellite  18.0     70
3              AMC Rebel SST  16.0     70
4                Ford Torino  17.0     70

DataFrame.to_string()

The to_string() method renders a DataFrame object to a console-friendly tabular string.

The syntax for this method is as follows:

DataFrame.to_string(buf=None, columns=None, col_space=None, header=True, index=True, na_rep='NaN', formatters=None, float_format=None, sparsify=None, index_names=True, justify=None, max_rows=None, max_cols=None, show_dimensions=False, decimal='.', line_width=None, min_rows=None, max_colwidth=None, encoding=None)

The respective parameters:

  • buf – The file path/buffer to write. If empty, a string returns.
  • columns – The subset of columns to write. If empty, all columns write.
  • col_space – The minimum width of each column.
  • header – Whether to write out the column names.
  • index – Whether to write out the row (index) names.
  • na_rep – The string representation of missing data. The default is 'NaN'.
  • formatters – Formatter functions to apply to elements by position/name.
  • float_format – A formatter for floating-point numbers.
  • sparsify – For a MultiIndex, set to False to print every key on each row.
  • index_names – Whether to display the index names.
  • justify – The column-header alignment.
  • max_rows – The maximum number of rows to display.
  • max_cols – The maximum number of columns to display.
  • show_dimensions – Whether to display the DataFrame dimensions (total rows/columns).
  • decimal – The decimal separator, e.g., a comma (,) in Europe.
  • line_width – The width at which to wrap a line, in characters.
  • min_rows – The rows to display if total rows > max_rows.
  • max_colwidth – The maximum width at which to truncate column contents.
  • encoding – A string representation of encoding. The default is UTF-8.

This example reads in the countries.csv file to a DataFrame. This DataFrame then converts to a string.

💡 Note: Click here to save this CSV file. Then move it to the current working directory.

df = pd.read_csv('countries.csv').head(4)
result = df.to_string()
print(result)
  • Line [1] reads in four (4) rows from the countries.csv file. The output saves to a DataFrame df.
  • Line [2] converts the DataFrame to a string. The output saves to result.
  • Line [3] outputs the result to the terminal.
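A few of the parameters above are worth seeing in action. The following sketch, using hypothetical data with a missing value, combines index, na_rep, and float_format:

```python
import pandas as pd
import numpy as np

# Hypothetical data with one missing Area value.
df = pd.DataFrame({
    'Country': ['Germany', 'France'],
    'Area': [357021.0, np.nan],
})

result = df.to_string(
    index=False,                     # hide the row index
    na_rep='--',                     # render missing values as --
    float_format='{:,.0f}'.format,   # thousands separator, no decimals
)
print(result)
```

The missing Area prints as -- and the float prints as 357,021, while the row index is suppressed entirely.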

Output

   Country Capital  Population    Area
0  Germany  Berlin    83783942  357021
1   France   Paris    67081000  551695
2    Spain  Madrid    47431256  498511
3    Italy    Rome    60317116  301338

DataFrame Styler

The DataFrame Styler returns a Styler object. This object contains methods for styling file types, such as Excel, CSV, or HTML files.

For this example, the first 15 records of the finxters.csv file are read into a DataFrame. Styles are then applied to this DataFrame, and the result saves to an Excel file.

💡 Note: Click here to save this CSV file. Then move it to the current working directory.

df = pd.read_csv('finxters.csv', usecols=['FID', 'Username', 'Solved']).head(15)

def color_rule(val):
    return ['background-color: #7FFFD4' if x >= 200 else 'background-color: #FFE4C4' for x in val]

solved = df.style.apply(color_rule, axis=1, subset=['Solved'])
solved.to_excel('users-styled.xlsx', engine='openpyxl')
  • Line [1] reads in three (3) columns from the top 15 rows of the finxters.csv file. The output saves to a DataFrame df.
  • Line [2-3] defines a function that checks whether each value in the Solved column is >= 200 and sets the background color accordingly.
  • Line [4] applies the style to the Solved column.
  • Line [5] saves the output to users-styled.xlsx using the openpyxl engine.

Output – users-styled.xlsx file

💡 Note: Click here for a Finxters in-depth article on Excel and styling.


Further Learning Resources

This is Part 24 of the DataFrame method series.

  • Part 1 focuses on the DataFrame methods abs(), all(), any(), clip(), corr(), and corrwith().
  • Part 2 focuses on the DataFrame methods count(), cov(), cummax(), cummin(), cumprod(), cumsum().
  • Part 3 focuses on the DataFrame methods describe(), diff(), eval(), kurtosis().
  • Part 4 focuses on the DataFrame methods mad(), min(), max(), mean(), median(), and mode().
  • Part 5 focuses on the DataFrame methods pct_change(), quantile(), rank(), round(), prod(), and product().
  • Part 6 focuses on the DataFrame methods add_prefix(), add_suffix(), and align().
  • Part 7 focuses on the DataFrame methods at_time(), between_time(), drop(), drop_duplicates() and duplicated().
  • Part 8 focuses on the DataFrame methods equals(), filter(), first(), last(), head(), and tail()
  • Part 9 focuses on the DataFrame methods equals(), filter(), first(), last(), head(), and tail()
  • Part 10 focuses on the DataFrame methods reset_index(), sample(), set_axis(), set_index(), take(), and truncate()
  • Part 11 focuses on the DataFrame methods backfill(), bfill(), fillna(), dropna(), and interpolate()
  • Part 12 focuses on the DataFrame methods isna(), isnull(), notna(), notnull(), pad() and replace()
  • Part 13 focuses on the DataFrame methods drop_level(), pivot(), pivot_table(), reorder_levels(), sort_values() and sort_index()
  • Part 14 focuses on the DataFrame methods nlargest(), nsmallest(), swap_level(), stack(), unstack() and swap_axes()
  • Part 15 focuses on the DataFrame methods melt(), explode(), squeeze(), to_xarray(), t() and transpose()
  • Part 16 focuses on the DataFrame methods append(), assign(), compare(), join(), merge() and update()
  • Part 17 focuses on the DataFrame methods asfreq(), asof(), shift(), slice_shift(), tshift(), first_valid_index(), and last_valid_index()
  • Part 18 focuses on the DataFrame methods resample(), to_period(), to_timestamp(), tz_localize(), and tz_convert()
  • Part 19 focuses on the visualization aspect of DataFrames and Series via plotting, such as plot(), and plot.area().
  • Part 20 focuses on continuing the visualization aspect of DataFrames and Series via plotting such as hexbin, hist, pie, and scatter plots.
  • Part 21 focuses on the serialization and conversion methods from_dict(), to_dict(), from_records(), to_records(), to_json(), and to_pickle().
  • Part 22 focuses on the serialization and conversion methods to_clipboard(), to_html(), to_sql(), to_csv(), and to_excel().
  • Part 23 focuses on the serialization and conversion methods to_markdown(), to_stata(), to_hdf(), to_latex(), to_xml().
  • Part 24 focuses on the serialization and conversion methods to_parquet(), to_feather(), to_string(), Styler.
  • Part 25 focuses on the serialization and conversion methods to_gbq() and to_coo().