How big should data be before it is worth running on Apache Spark?

Keywords: apache-spark bigdata


What size of data makes working with Apache Spark worthwhile? Is it useful to run Python code on a Spark cluster when the data is only a few megabytes? Will Spark decrease the execution time in that case compared to running the code locally?