distrdf001_spark_connection.py
## \file
## \ingroup tutorial_dataframe
## \notebook -draw
## Configure a Spark connection and fill two histograms in a distributed fashion.
##
## This tutorial shows the ingredients needed to set up the connection to a Spark
## cluster, namely a SparkConf object holding configuration parameters and a
## SparkContext object created with the desired options. After this initial
## setup, an RDataFrame with distributed capabilities is created and connected
## to the SparkContext instance. Finally, a couple of histograms are drawn from
## the created columns in the dataset.
##
## \macro_code
## \macro_image
##
## \date March 2021
## \author Vincenzo Eduardo Padulano
import pyspark
import ROOT

# Set up the connection to Spark.
# First create a dictionary with keys representing Spark-specific configuration
# parameters. In this tutorial we use the following configuration parameters:
#
# 1. spark.app.name: The name of the Spark application.
# 2. spark.master: The Spark endpoint responsible for running the
#    application. With the syntax "local[2]" we signal Spark that we want to run
#    locally on the same machine with 2 cores, each running a separate
#    process. By default, a Spark application runs locally on the same machine
#    with as many concurrent processes as available cores; this could also be
#    written as "local[*]".
#
# If you have access to a remote cluster, substitute the URL of your Spark
# master, in the form "spark://HOST:PORT", as the value of `spark.master`.
# Depending on the availability of your cluster you may request more computing
# nodes or cores per node with a similar configuration:
#
# sparkconf = pyspark.SparkConf().setAll(
#     {"spark.master": "spark://HOST:PORT",
#      "spark.executor.instances": <number_of_nodes>,
#      "spark.executor.cores": <cores_per_node>}.items())
#
# You can find all configuration options and more details in the official Spark
# documentation at https://spark.apache.org/docs/latest/configuration.html .

# Create a SparkConf object with all the desired Spark configuration parameters
sparkconf = pyspark.SparkConf().setAll(
    {"spark.app.name": "distrdf001_spark_connection",
     "spark.master": "local[2]",
     "spark.driver.memory": "4g"}.items())
# Create a SparkContext with the configuration stored in `sparkconf`
sparkcontext = pyspark.SparkContext(conf=sparkconf)

# Create an RDataFrame that will use Spark as a backend for computations
df = ROOT.RDataFrame(1000, executor=sparkcontext)
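
# Note (an addition to this tutorial): the distributed RDataFrame constructor
# also accepts an optional `npartitions` keyword argument to choose in how many
# tasks the dataset is split across the cluster. A sketch of that usage, with
# the same empty 1000-entry dataset, would look like:
#
#     df = ROOT.RDataFrame(1000, executor=sparkcontext, npartitions=2)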

# Set the random seed and define two columns of the dataset with random numbers.
ROOT.gRandom.SetSeed(1)
df_1 = df.Define("gaus", "gRandom->Gaus(10, 1)").Define("exponential", "gRandom->Exp(10)")

# Book a histogram for each column
h_gaus = df_1.Histo1D(("gaus", "Normal distribution", 50, 0, 30), "gaus")
h_exp = df_1.Histo1D(("exponential", "Exponential distribution", 50, 0, 30), "exponential")
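
# Note (an addition to this tutorial): booked actions are lazy, so further
# supported actions such as Count or Mean can be registered on the same
# dataframe and are evaluated in the same distributed computation, e.g.:
nentries = df_1.Count()
mean_gaus = df_1.Mean("gaus")
print(f"Processed {nentries.GetValue()} entries, mean of 'gaus': {mean_gaus.GetValue():.2f}")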

# Plot the histograms side by side on a canvas
c = ROOT.TCanvas("distrdf001", "distrdf001", 800, 400)
c.Divide(2, 1)
c.cd(1)
h_gaus.DrawCopy()
c.cd(2)
h_exp.DrawCopy()

# Save the canvas
c.SaveAs("distrdf001_spark_connection.png")
print("Saved figure to distrdf001_spark_connection.png")