Agents: getting started
Python

Running your first Kensu Pandas program


Assuming you have downloaded the code as explained here, run the script script_with_kensu.py.

The code is below. You do not need to change the configuration file you downloaded: conf.ini ships with the default settings, which send the collected metadata to the Kensu cloud via the ApiReporter.

In other words, the config file has all the [kensu.reporter] items commented out:

```ini
[kensu.reporter]
; Name (class but conventional for now) of the reporter
; name can be ApiReporter, KafkaReporter, PrintReporter, LoggingReporter, FileReporter, MultiReporter

; Conf: MultiReporter
;reporters=["KafkaReporter", "PrintReporter", "LoggingReporter", "FileReporter"]

; Conf: KafkaReporter
;bootstrap_servers=[]
;topic=kensu-events

; Conf: FileReporter
;file_name=jan_log_pandas_example.log
;level=DEBUG
```
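If you would rather inspect events locally before sending anything to the cloud, a plausible variant is to uncomment the reporter name and the FileReporter keys shown above (this sketch assumes the keys behave as their comments suggest):

```ini
[kensu.reporter]
; write events to a local log file instead of the Kensu cloud
name=FileReporter
file_name=jan_log_pandas_example.log
level=DEBUG
```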
Run the program like this: python script_with_kensu.py

```python
# Point the Kensu library at the configuration file.
# For this getting-started example, conf.ini is the file for the current month.
import os
os.environ["KSU_CONF_FILE"] = "conf.ini"

# Initialize the library using the conf file.
from kensu.utils.kensu_provider import KensuProvider
kensu = KensuProvider().initKensu()

# A small data-preparation example demonstrating kensu-py.
# kensu.pandas wraps pandas so that reads, merges, and writes are observed.
import kensu.pandas as pd

customers_info = pd.read_csv('data/customers-data.csv')
contact_info = pd.read_csv('data/contact-data.csv')
business_info = pd.read_csv('data/business-data.csv')

# Join customer and contact records on their shared 'id' column.
customer360 = customers_info.merge(contact_info, on='id')

# Without an explicit key, pd.merge joins on all columns common to both frames.
monthly_ds = pd.merge(customer360, business_info)

monthly_ds.to_csv('data/data.csv', index=False)
```

The three DataFrames are merged and the result is written to data/data.csv. Because the script imports kensu.pandas in place of pandas, the reads, merges, and the final write are observed, and the corresponding metadata is sent to the Kensu cloud.
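The data preparation itself is ordinary pandas. The sketch below reproduces the same merge logic with plain pandas on tiny in-memory frames, so you can see what the script computes before any Kensu reporting is involved (the column names here are illustrative assumptions, not the real CSV schemas):

```python
import pandas as pd

# Toy stand-ins for the three CSV files (columns are illustrative).
customers_info = pd.DataFrame({"id": [1, 2], "name": ["Ada", "Bo"]})
contact_info = pd.DataFrame({"id": [1, 2], "email": ["a@x.io", "b@x.io"]})
business_info = pd.DataFrame({"name": ["Ada", "Bo"], "revenue": [100, 200]})

# Join customer and contact records on the shared 'id' column.
customer360 = customers_info.merge(contact_info, on="id")

# pd.merge without 'on' joins on all common columns (here: 'name').
monthly_ds = pd.merge(customer360, business_info)

print(monthly_ds.columns.tolist())  # ['id', 'name', 'email', 'revenue']
print(len(monthly_ds))              # 2
```

Swapping the import to kensu.pandas, as in the script above, keeps this logic unchanged while adding observability.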

You need to be logged into the Kensu dashboard to see the data.

Updated 03 Mar 2023
Docs powered by
Archbee