Accessing PySpark from a Jupyter Notebook
Posted by Andrew B. Collier on 2017-07-04.
It’d be great to be able to interact with PySpark from a Jupyter Notebook. This post describes how to get that set up. It assumes that you’ve installed Spark like this.
- Install the `findspark` package.
- Make sure that the `SPARK_HOME` environment variable is defined.
- Launch a Jupyter Notebook.
- Import the `findspark` package, then use `findspark.init()` to locate the Spark process and load the `pyspark` module. See below for a simple example.