Accessing PySpark from a Jupyter Notebook


It’d be great to interact with PySpark from a Jupyter Notebook. This post describes how to set that up; it assumes that you’ve already installed Spark as described in the previous post.

  1. Install the findspark package.
$ pip3 install findspark
  2. Make sure that the SPARK_HOME environment variable is defined.
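For example (the path below is hypothetical; point it at your own Spark installation):
$ export SPARK_HOME=/opt/spark  # hypothetical install path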
  3. Launch a Jupyter Notebook.
$ jupyter notebook
  4. Import the findspark package and call findspark.init() to locate your Spark installation, then import the pyspark module. See below for a simple example.
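
Here is a minimal sketch of such a notebook cell (the app name and the toy computation are illustrative, not taken from the original example):

import findspark
findspark.init()  # locate the Spark installation via SPARK_HOME and add pyspark to sys.path

import pyspark

# Create a SparkContext and run a trivial job to confirm the setup works.
sc = pyspark.SparkContext(appName="notebook-example")
rdd = sc.parallelize(range(100))
print(rdd.sum())  # 4950
sc.stop()

If findspark.init() raises an error, double-check that SPARK_HOME points at the root of your Spark installation.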
