Big Data Hadoop Developer with Kafka
Charlotte, NC 28201, USA

Vacancy expired!

We (Synechron, Inc.) are looking to hire for the role of Big Data Hadoop Developer with Kafka. This role is long-term and based in Charlotte, NC.

About Synechron: Synechron is one of the fastest-growing digital, business consulting & technology firms in the world. Headquartered in New York and with 22 offices around the world, Synechron is a leading Digital Transformation consulting firm working to accelerate digital initiatives for banks, asset managers, and insurance companies around the world. Synechron uniquely delivers these firms end-to-end Digital, Consulting and Technology capabilities, with expertise in wholesale banking, wealth management, and insurance, as well as emerging technologies like Blockchain, Artificial Intelligence, and Data Science. This has helped the company grow to $650 Million+ in annual revenue and 10,000+ employees, and we're continuing to invest in research and development in the form of Accelerators (prototype applications) developed in our global Financial Innovation Labs (FinLabs).

Learn more at: http://synechron.com/technology

Role: Big Data Hadoop Developer with Kafka

Location: Charlotte, NC

Duration: Long Term Project

Description: Develops, enhances, debugs, supports, maintains, and tests software applications that support business units or supporting functions. These application solutions may involve diverse development platforms, software, hardware, technologies, and tools.
  • Participates in the design, development and implementation of complex applications using new technologies.
  • May provide technical direction and system architecture for individual initiatives.
  • Serves as a fully seasoned/proficient technical resource.
  • May collaborate with external programmers to coordinate delivery of software applications.
  • Routinely accountable for technical knowledge and capabilities.
  • Should be able to work independently under minimal supervision, with general guidance from more seasoned consultants.
  • The candidate is expected to liaise with business analysts and other technology delivery managers.
  • Solid understanding of OOP languages; must have working experience in C, Core Java, and J2EE.
  • Good knowledge of Hadoop cluster architecture; hands-on experience with Hadoop and the Hadoop ecosystem is required.
  • Proven experience within the Cloudera Hadoop ecosystem (MR1, MR2, HDFS, YARN, Hive, HBase, Sqoop, Pig, Hue, etc.)
  • Design and implement Apache Spark-based real-time stream processing data pipelines involving complex data processing; hands-on experience developing applications using Big Data, Kafka, Cassandra, Apache Storm, Apache Spark, and related areas (a minimal pipeline sketch follows this list).
  • Implement complex data processing algorithms in real time in an optimized and efficient manner using Scala/Java. Knowledge of at least one scripting language, such as Python, Unix shell scripting, or Perl, is essential for this position.
  • Excellent analytical and problem-solving skills; willingness to take ownership and resolve technical challenges.
  • Experience performing proofs of concept for new technologies.
  • Must have experience working with external vendors/partners such as Cloudera, DataStax, etc.
  • Strong communication and documentation skills, technology awareness, and the ability to interact with technology leaders are a must.
  • Good knowledge of Agile methodology and the Scrum process.
  • Bachelor’s degree in Science or Engineering
  • 9+ years of industry experience.
  • Minimum of 5-7 years of Big Data experience.
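
For illustration, here is a minimal sketch of the kind of Kafka-to-Spark real-time pipeline the role describes, written in Scala with Spark Structured Streaming. The broker address ("localhost:9092"), topic name ("events"), and object name are placeholder assumptions, and the console sink stands in for a durable store such as Cassandra or HBase:

import org.apache.spark.sql.SparkSession

object KafkaStreamSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-stream-sketch")
      .getOrCreate()
    import spark.implicits._

    // Subscribe to a Kafka topic; broker address and topic name are placeholders.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "events")
      .load()

    // Kafka delivers key and value as binary; cast them to strings and
    // keep a running count of records per key.
    val counts = raw
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
      .groupBy($"key")
      .count()

    // Print running counts to the console; a production pipeline would
    // write to a durable sink (Cassandra, HBase, etc.) instead.
    counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()
      .awaitTermination()
  }
}

Running this sketch requires the Spark-Kafka connector (the spark-sql-kafka-0-10 package) on the classpath.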
Thanks & Regards,
Vikarant Kumar


