Experience: 3+ Years
Job Description: This role is for an experienced Big Data Engineer who is passionate about building big data infrastructure, ingesting large volumes of data into a Big Data cluster, and developing analytics applications around it. The right candidate will be part of a collaborative team supporting existing client applications, as well as re-platforming those applications using Big Data technologies.
Required Skills and Experience:
- 3+ years of relevant experience in Big Data
- Exposure to Hortonworks/Cloudera production implementations
- Knowledge of Linux and shell scripting is a must
- Sound knowledge of Python or Scala
- Sound knowledge of Spark, HDFS, Hive, and HBase
- Thorough understanding of Hadoop, Spark, and ecosystem components
- Proficiency with data ingestion tools such as Sqoop, Flume, Talend, and Kafka is a must
- Candidates with knowledge of Machine Learning using Spark will be given preference
- Knowledge of AWS, Google Cloud Platform, and their various components is preferred