Error: filter Spark DataFrame on column value

Keywords: scala apache-spark dataframe

Question: 

Please refer to my sample code below. sampleDf is my sample Scala DataFrame, which I want to filter on two columns, "startIPInt" and "endIPInt":

var row = sampleDf.filter("startIPInt <=" + ip).filter("endIPInt >= " + ip)

I now want to view the contents of this row. The following takes barely a second to execute, but it does not show me the contents of the row object:

println(row)

But this code takes too long to execute:

row.show()

So my question is: how should I view the contents of this row object? Or is there an issue with the way I am filtering my DataFrame?

My initial approach was to use filter as described in the API docs: https://spark.apache.org/docs/1.5.0/api/java/org/apache/spark/sql/DataFrame.html#filter(java.lang.String)

According to that, the following line of code gives me an "overloaded method 'filter'" error:

var row = sampleDf.filter($"startIPInt" <= ip).filter($"endIPInt" >= ip)

Can anyone help me understand what is happening here? And what is the right and fastest way to filter a DataFrame and view its contents, as above?

Thanks

Answers:
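Nothing is wrong with the filter itself; the difference you are seeing comes from Spark's lazy evaluation. filter is a transformation, so row is not a materialized row at all: it is a DataFrame describing a query that has not run yet. println(row) is fast because it only prints the DataFrame's schema (something like [startIPInt: bigint, endIPInt: bigint]) without touching any data. row.show() is an action, so that is the point where Spark actually scans the data to find matching rows, and that is where all the time goes. The string form and the $"..." form of the filter compile to the same plan, so neither is inherently faster.

To view the contents you have to run some action. A minimal sketch, assuming the matches fit comfortably in driver memory:

// trigger the query but display at most 10 matching rows
row.show(10)

// or pull all matches back to the driver as an Array[Row]
val matches = row.collect()
matches.foreach(println)

// if you expect a single matching range, head() returns the first row
// (and throws NoSuchElementException if there is no match)
val firstMatch = row.head()

As for the "overloaded method 'filter'" error: filter has several overloads (a String expression, a Column, and function variants), and the $"startIPInt" syntax only works when the SQL implicits are in scope, so the compiler reports an overload mismatch when the argument resolves to none of those types. Without the full compiler message it is hard to say exactly which case you hit, but a Column-based version that typically compiles cleanly looks like this (assuming ip is a numeric value and spark is your SparkSession):

import org.apache.spark.sql.functions.lit
import spark.implicits._   // enables the $"colName" column syntax

val result = sampleDf.filter($"startIPInt" <= lit(ip) && $"endIPInt" >= lit(ip))
result.show(10)

Chaining two .filter calls, as in your first snippet, is equivalent: Catalyst merges consecutive filters into a single predicate, so pick whichever reads better.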