- Perform data ingestion and write MapReduce jobs (a minimal MapReduce sketch appears after this list).
- Design, develop, enhance, debug, and implement J2EE, Angular, and Spring-based applications. Perform application requirement analysis and estimation for new requirements.
- Address problems of system integration, compatibility, and multiple platforms, as well as defects encountered in System Testing and UAT.
- Work with the Project Manager/Business Analyst and the client to gather requirements for user stories.
- Develop and deliver artifacts following Agile methodology.
- Ability to adapt quickly to an existing, complex environment and to learn new concepts and software technologies as needs arise.
- Adaptable and flexible in meeting demands, with the relentlessness and passion to get the job done.
- Collaborative team player with communication skills to match.
- Strong project and time management skills, including multitasking in fast-paced development environments while maintaining attention to detail and high standards for quality.
- Minimum of 5 years of experience with the Hadoop ecosystem and Big Data technologies using Java/J2EE.
- Minimum of 3 years of experience with design strategies for developing scalable, resilient data lakes: data storage, partitioning, splitting, and file types (Parquet, Avro, ORC).
- Minimum of 3 years of experience with the Hadoop ecosystem: HDFS, MapReduce, HBase, Hive, Pig, and Sqoop.
- Minimum of 5 years of experience with SQL and MySQL.
- Minimum of 5 years of experience with back-end persistence technologies such as JPA or Hibernate.
- Experience with one or more data ingestion frameworks, such as Kafka, Storm, or NiFi (see the producer sketch after this list).
- Knowledge of Impala and MongoDB.
- Knowledge of Scala and Python.
- Experience with Hadoop clusters on a public cloud platform (AWS, Azure, or Google Cloud Platform).
- Experience with Google BigQuery and Cloud Dataflow/Apache Beam (see the Beam sketch after this list).
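
To illustrate the MapReduce work referenced above, here is a minimal sketch of the classic word-count job written against the Hadoop Java MapReduce API. Class names are illustrative, and the HDFS input and output paths are taken from the command line.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emit (word, 1) for each token in the input line
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sum the counts emitted for each word
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // combiner cuts shuffle volume
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input dir
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Reusing the reducer as a combiner is safe here because summation is associative and commutative, so partial map-side sums do not change the result.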
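
As a sketch of data ingestion with one of the frameworks named above (Kafka), the snippet below publishes a single JSON record to a topic using the Kafka Java producer client. The broker address, topic name, key, and payload are hypothetical placeholders.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class IngestProducer {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    props.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for full ISR acknowledgment

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
      // hypothetical topic, key, and payload
      ProducerRecord<String, String> record =
          new ProducerRecord<>("raw-events", "device-42", "{\"temp\": 21.5}");
      producer.send(record, (metadata, exception) -> {
        if (exception != null) {
          exception.printStackTrace(); // delivery failed after retries
        } else {
          System.out.printf("wrote to %s-%d@%d%n",
              metadata.topic(), metadata.partition(), metadata.offset());
        }
      });
    } // close() flushes any pending records
  }
}
```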
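
Finally, a minimal sketch of the same word-count aggregation expressed in the Apache Beam model, which Cloud Dataflow can execute. The `gs://` paths are hypothetical, and runner options (e.g. `--runner=DataflowRunner` with project/region settings) would be passed on the command line.

```java
import java.util.Arrays;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;

public class MinimalWordCount {
  public static void main(String[] args) {
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline p = Pipeline.create(options);

    p.apply("ReadLines", TextIO.read().from("gs://my-bucket/input/*.txt")) // hypothetical path
     .apply("SplitWords", FlatMapElements
         .into(TypeDescriptors.strings())
         .via((String line) -> Arrays.asList(line.split("\\W+"))))
     .apply("CountWords", Count.perElement())
     .apply("FormatResults", MapElements
         .into(TypeDescriptors.strings())
         .via((KV<String, Long> kv) -> kv.getKey() + ": " + kv.getValue()))
     .apply("WriteCounts", TextIO.write().to("gs://my-bucket/output/counts")); // hypothetical path

    p.run().waitUntilFinish();
  }
}
```

Unlike the Hadoop version, the same Beam pipeline can run unmodified on different runners (Dataflow, Spark, Flink, or the local DirectRunner), which is the portability the role's Dataflow/Beam requirement points at.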