create spark dataframe from pandas dataframes inside RDD

Keywords: pandas apache-spark pyspark


I'm trying to convert a pandas dataframe built on each worker node into a single Spark dataframe distributed across all worker nodes.


def read_file_and_process_with_pandas(filename):
    data = pd.read_csv(filename)
    # ... some additional operations using pandas functionality ...
    return data

filelist = ['file1.csv', 'file2.csv', 'file3.csv']
rdd = sc.parallelize(filelist)
rdd = rdd.map(read_file_and_process_with_pandas)

Now I have an RDD of pandas dataframes. How can I convert this into a Spark dataframe?

I tried converting the RDD directly, but when I do something like rdd.take(5), I get the following error:

PicklingError: Could not serialize object: Py4JError: An error occurred while calling o103.__getnewargs__. Trace:
py4j.Py4JException: Method __getnewargs__([]) does not exist
    at py4j.reflection.ReflectionEngine.getMethod(
    at py4j.reflection.ReflectionEngine.getMethod(
    at py4j.Gateway.invoke(
    at py4j.commands.AbstractCommand.invokeMethod(
    at py4j.commands.CallCommand.execute(

Is there a way to convert the pandas dataframes on each worker node into one distributed Spark dataframe?