Problem Scenario 69 : Write a Spark application using Python
that reads a file "Content.txt" (on HDFS) with the following content,
filters out words that are shorter than 2 characters, and ignores all empty lines.
Once done, store the filtered data in a directory called "problem84" (on HDFS).
A sketch of one possible solution follows the sample content below.
Content.txt
Apache Spark Training
This is Spark Learning Session
Spark is faster than MapReduce
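A minimal PySpark sketch of one possible solution follows. The input path assumes "Content.txt" sits in the user's HDFS home directory, and the application name is illustrative; adjust both to your environment.

from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("Problem84WordFilter")  # illustrative app name
sc = SparkContext(conf=conf)

# Read "Content.txt" from HDFS (path relative to the user's home directory).
content = sc.textFile("Content.txt")

# Ignore empty lines, split the remaining lines into words,
# and drop words shorter than 2 characters.
non_empty = content.filter(lambda line: len(line.strip()) > 0)
words = non_empty.flatMap(lambda line: line.split(" "))
filtered = words.filter(lambda word: len(word) >= 2)

# Store the filtered data in the HDFS directory "problem84".
filtered.saveAsTextFile("problem84")

sc.stop()

Submit the script with spark-submit (for example, spark-submit filter_words.py, where the script name is just an example); note that the output directory "problem84" must not already exist on HDFS, or saveAsTextFile will fail.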