Disclaimer: originally I planned to write a post about the R functions/packages that allow reading data from HDFS (with benchmarks), but in the end it became more of an overview of SparkR capabilities.
Nowadays working with “big data” almost always means working with the Hadoop ecosystem. A few years ago this also meant that you had to be a good Java programmer to work in such an environment – even a simple word count program took several dozen lines of code. But 2-3 years ago things changed, thanks to Apache Spark with its concise (but powerful!) functional-style API. Spark is written in Scala, but it also has Java, Python and, more recently, R APIs.
I started using Spark more than 2 years ago (and have used it a lot since). In most cases I use Scala because:
- JVM native
- the only fully featured API – RDD-level API, MLlib, GraphX, etc.
- nice REPL
- Scala is well suited for data munging – a good tradeoff between complexity and efficiency.
During this period I tried SparkR several times, but until version 1.6 it had too many rough edges. Starting from 1.6 it became a really useful tool for simple manipulations on Spark DataFrames. Unfortunately, we still do not have R user-defined functions, so SparkR functionality is limited to built-in functions.
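To make that last point concrete, here is a minimal sketch with the 1.6 API; the local master, app name and the use of R's built-in faithful dataset are just for illustration:

```r
library(SparkR)

# Spark 1.6-style initialization (local master just for this example)
sc <- sparkR.init(master = "local[2]", appName = "sparkr-builtins")
sqlContext <- sparkRSQL.init(sc)

# Ship a small local data.frame to Spark
df <- createDataFrame(sqlContext, faithful)

# With no R UDFs, new columns have to be composed from SparkR's built-in
# column functions (log, lower, regexp_replace, ...), which are translated
# into Spark SQL expressions and executed inside the JVM
df <- withColumn(df, "log_eruptions", log(df$eruptions))
head(select(df, "eruptions", "log_eruptions"))
```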
A common pipeline for a data scientist can be the following:
- read data from HDFS
- do some data wrangling (join/filter/etc.)
- optionally take a subset/sample and collect the data into a local R session for exploratory analysis and model fitting
Let's have a closer look at these steps.
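Below is a rough end-to-end sketch of that pipeline with the SparkR 1.6 API; the master URL, paths, column names and sampling fraction are made up for illustration:

```r
library(SparkR)

# Running against a cluster this time (hypothetical yarn-client master)
sc <- sparkR.init(master = "yarn-client", appName = "sparkr-pipeline")
sqlContext <- sparkRSQL.init(sc)

# 1. read data from HDFS (parquet here; other formats are covered below)
events <- read.df(sqlContext, "hdfs:///data/events", source = "parquet")

# 2. data wrangling with built-in DataFrame operations (filter/select/join/...)
recent <- filter(events, events$year == 2016)
slim   <- select(recent, "user_id", "ts", "value")

# 3. take a ~1% sample and collect it into the local R session
local_df <- collect(sample(slim, withReplacement = FALSE, fraction = 0.01))
str(local_df)
```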
Reading data from HDFS
Files in HDFS are usually stored in one of the following formats:
- plain txt/csv/json files
- sequence files. You can think of them as serialized Java objects. They have become less popular in recent years, and they are not portable (they need custom readers), so I do not find them interesting for this post.
- avro (row-based)
- parquet (column-based)
- orc (column-based)
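For the portable formats above, SparkR 1.6 can read them through the data source API. A sketch, assuming the sqlContext from the pipeline snippet and purely hypothetical paths (csv and avro need the external spark-csv and spark-avro packages, e.g. added with --packages when launching SparkR):

```r
library(SparkR)
# sqlContext as initialized in the pipeline sketch above; paths are hypothetical

df_json    <- read.df(sqlContext, "hdfs:///data/logs.json", source = "json")

# csv via the external spark-csv package (com.databricks:spark-csv)
df_csv     <- read.df(sqlContext, "hdfs:///data/logs.csv",
                      source = "com.databricks.spark.csv",
                      header = "true", inferSchema = "true")

# avro via the external spark-avro package (com.databricks:spark-avro)
df_avro    <- read.df(sqlContext, "hdfs:///data/logs.avro",
                      source = "com.databricks.spark.avro")

df_parquet <- read.df(sqlContext, "hdfs:///data/events", source = "parquet")

# orc: in Spark 1.x the ORC data source lives in the Hive module, so a
# HiveContext (sparkRHive.init(sc)) may be needed instead of a plain SQLContext
df_orc     <- read.df(sqlContext, "hdfs:///data/logs.orc", source = "orc")
```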
Good news is that Spark …read more
Source: r-bloggers.com