Configure the PySpark agent
Modify ./kensu-spark-example/conf.ini: set ingestion_token to your Ingestion Token and ingestion_url to your ingestion URL. You can get both values by following Getting Credentials.
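For reference, the relevant entries might look like the sketch below. The [kensu] section header and the placeholder values are assumptions; keep whatever other settings your copy of the file contains.

```ini
; conf.ini — sketch only; the section header is an assumption
[kensu]
ingestion_url=https://<your-kensu-instance>
ingestion_token=<your-ingestion-token>
```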
This is what the code looks like before we add the Kensu Spark agent. It builds a regular Spark DataFrame by merging three CSV files and saves it as a Parquet file.
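A minimal sketch of such a program is shown below; the file names, paths, and join keys are placeholders, not the original example's code.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-example").getOrCreate()

# Read the three CSV files with regular Spark readers (placeholder paths)
customers = spark.read.option("header", True).csv("data/customers.csv")
orders = spark.read.option("header", True).csv("data/orders.csv")
products = spark.read.option("header", True).csv("data/products.csv")

# Merge them into a single DataFrame (placeholder join keys)
df = customers.join(orders, "customer_id").join(products, "product_id")

# Save the result as a Parquet file
df.write.mode("overwrite").parquet("output/report.parquet")

spark.stop()
```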
To include Kensu in your program, follow these steps:
1️⃣ Modify the library imports to include the Kensu modules
Note that we still import the regular Spark SQL libraries; in addition, we import kensu.pyspark.
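For example, the import section could look like the following sketch; the exact name exported by kensu.pyspark (here init_kensu_spark) is an assumption, so check it against your installed kensu-py version.

```python
# Regular Spark SQL imports
from pyspark.sql import SparkSession

# Kensu agent for PySpark; the entry point name is an assumption
from kensu.pyspark import init_kensu_spark
```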
2️⃣ Initialize Kensu in your code
Here we create an instance of Kensu, referencing the JAR file that we passed to spark-submit.
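A sketch of what this step might look like, assuming an init_kensu_spark entry point and a spark.driver.extraClassPath setting for the agent JAR (both are assumptions, and the JAR path is a placeholder):

```python
spark = (
    SparkSession.builder
    .appName("kensu-spark-example")
    # Same agent JAR that was passed to spark-submit (placeholder path)
    .config("spark.driver.extraClassPath", "kensu-spark-agent.jar")
    .getOrCreate()
)

# Attach Kensu to the session; the ingestion URL and token are read
# from conf.ini (function name and signature are assumptions)
init_kensu_spark(spark)
```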
3️⃣ Send Metadata to Kensu
As with regular Spark, the DataFrames in the code are plain Spark DataFrames. The code uses regular Spark functions to read the .csv files and join them.
The program saves the resulting DataFrame as a Parquet file with df.write.mode(). That write is what sends the metadata to Kensu: the DataSources, Schemas, and observability metrics.
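For instance, a write such as the following (mode and path are placeholders) triggers the metadata collection:

```python
# A regular Spark write; with the agent attached, it also reports the
# DataSources, Schemas, and observability metrics to Kensu
df.write.mode("overwrite").parquet("output/report.parquet")
```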
Here is the complete code after the modifications.
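The sketch below combines the steps above into one script; the Kensu entry point, the JAR path, and all file names are assumptions or placeholders rather than the exact example code.

```python
from pyspark.sql import SparkSession

# Kensu agent for PySpark (entry point name is an assumption)
from kensu.pyspark import init_kensu_spark

spark = (
    SparkSession.builder
    .appName("kensu-spark-example")
    # Same agent JAR that was passed to spark-submit (placeholder path)
    .config("spark.driver.extraClassPath", "kensu-spark-agent.jar")
    .getOrCreate()
)

# Initialize Kensu; credentials come from conf.ini
init_kensu_spark(spark)

# Regular Spark reads (placeholder paths)
customers = spark.read.option("header", True).csv("data/customers.csv")
orders = spark.read.option("header", True).csv("data/orders.csv")
products = spark.read.option("header", True).csv("data/products.csv")

df = customers.join(orders, "customer_id").join(products, "product_id")

# This write also sends the metadata (DataSources, Schemas,
# observability metrics) to Kensu
df.write.mode("overwrite").parquet("output/report.parquet")

spark.stop()
```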