I read this article http://www.russellspitzer.com/2017/05/19/Spark-Sql-Thriftserver/ and became confused. It states:
Spark Contexts are also unable to share cached resources amongst each other. This means that unless you have a single Spark Context, it is impossible for multiple users to share cached data. The Spark Thrift Server can be that "single context," providing a globally available cache.
However, a Stack Overflow answer on the Spark context for the Thrift Server states:
There is just one Spark Context in the Thrift Server. The Spark Thrift Server is not suitable for highly concurrent application access.
Tableau et al. connect to Spark SQL through the Simba ODBC/JDBC driver, but given the above conflicting statements, what is one to conclude? Moreover, users firing their own ad-hoc SQL statements will not benefit from caching, since each user runs different queries. The only way I can see caching working is by pre-caching tables from beeline. Or is this not true?
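For reference, the pre-caching approach I have in mind looks roughly like this (the host, port, user, and the table name `web_logs` are all placeholders):

```shell
# Connect to the Spark Thrift Server with beeline
# (host/port/user below are assumptions, not real values)
beeline -u jdbc:hive2://thriftserver-host:10000 -n spark
```

```sql
-- Run once from the beeline session; 'web_logs' is a hypothetical table.
-- CACHE TABLE is eager by default in recent Spark versions, so the data
-- is materialized in the Thrift Server's single SparkContext right away.
CACHE TABLE web_logs;

-- In theory, any later query from any client connected to the same
-- Thrift Server (Tableau via the Simba driver, another beeline session)
-- that touches web_logs would then read from this shared cache.
SELECT count(*) FROM web_logs;
```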