Error: filter Spark DataFrame on column value

Keywords: scala apache-spark dataframe


Please refer to my sample code below. sampleDf is my sample Scala DataFrame, which I want to filter on the two columns "startIPInt" and "endIPInt".

var row = sampleDf.filter("startIPInt <= " + ip).filter("endIPInt >= " + ip)

I now want to view the contents of this row. The following takes barely a second to execute, but does not show me the contents of the row object.


But this code takes too long to execute ->

So my question is: how should I view the contents of this row object? Or is there an issue with the way I am filtering my DataFrame?
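For context on why a filter call can return instantly while viewing results is slow: Spark transformations such as `filter` are lazy, and nothing actually runs until an action is called. A minimal sketch of the usual ways to materialize the filtered rows, assuming a local `SparkSession` named `spark` and the column names above (the sample data here is hypothetical):

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical setup: a local session and a tiny DataFrame with the same columns.
val spark = SparkSession.builder().master("local[*]").appName("filter-demo").getOrCreate()
import spark.implicits._

val sampleDf = Seq((100L, 200L), (300L, 400L)).toDF("startIPInt", "endIPInt")
val ip = 150L

// `filter` alone builds a lazy query plan; no work happens yet.
val row = sampleDf.filter("startIPInt <= " + ip).filter("endIPInt >= " + ip)

// These actions actually execute the query:
row.show()               // prints matching rows to stdout
val rows = row.collect() // brings matching rows to the driver as Array[Row]
```

Printing the DataFrame object itself (e.g. `println(row)`) only shows the schema, which is why it returns immediately without displaying any data.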

My initial approach was to use filter with column expressions, as mentioned ->

According to that, the following line of code gives me an "overloaded method value filter" error:

var row = sampleDf.filter($"startIPInt" <= ip).filter($"endIPInt" >= ip)

Can anyone help me understand what is happening here, and what is the right (and fastest) way to filter a DataFrame and view its contents?
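For reference, both filter styles can work once `spark.implicits._` is in scope (it provides the `$"col"` syntax); an "overloaded method value filter" error typically means the argument's type does not match any single `filter` overload, e.g. when `ip` has an ambiguous type such as `Any`. A hedged sketch, assuming `ip` is a concrete numeric type and the same hypothetical sample data as above:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().master("local[*]").appName("filter-styles").getOrCreate()
import spark.implicits._ // required for the $"col" syntax

val sampleDf = Seq((100, 200), (300, 400)).toDF("startIPInt", "endIPInt")
val ip: Int = 150 // a concrete numeric type, not Any or a raw String

// SQL-expression style (one combined condition instead of two filters):
val row1 = sampleDf.filter(s"startIPInt <= $ip AND endIPInt >= $ip")

// Column-expression style, using $ from spark.implicits._ :
val row2 = sampleDf.filter($"startIPInt" <= ip && $"endIPInt" >= ip)

// Equivalent column-expression style without implicits, via functions.col:
val row3 = sampleDf.filter(col("startIPInt") <= ip && col("endIPInt") >= ip)

row2.show()
```

Both styles compile to the same query plan, so performance should be equivalent; the perceived slowness usually comes from the action (`show`, `collect`) triggering the actual scan, not from the choice of filter syntax.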