Data Streaming Engineer - Kafka

Note: Day 1 onsite, moving to hybrid mode after 3 months. However, if you come across an exceptional candidate who wants Day 1 hybrid only, please share the profile.

Technical experience:
- Design and recommend the approach best suited to data movement to/from different sources using Apache/Confluent Kafka.
- Good understanding of event-driven architecture, messaging frameworks, and stream-processing solutions built on the Kafka messaging framework.
- Hands-on experience with Kafka Connect and Schema Registry in a high-volume environment (a minimal connector-configuration sketch follows this list).
- Strong knowledge of and exposure to Kafka brokers, ZooKeeper, ksqlDB (KSQL), Kafka Streams, and Confluent Control Center.
- Good knowledge of the big data ecosystem; able to design and develop capabilities that deliver solutions through CI/CD pipelines.
- Skilled at writing and troubleshooting Python/PySpark scripts that generate extracts, cleanse, conform, and deliver data for consumption (see the PySpark sketch after this list).
- Strong working knowledge of the AWS data analytics ecosystem, such as AWS Glue, S3, Athena, and SQS (see the Athena sketch after this list).
- Good understanding of other AWS services, such as CloudWatch monitoring and scheduling/automation services.
- Good experience working with Kafka connectors such as MQ, Elasticsearch, JDBC, FileStream, and JMS source connectors, as well as Connect internals: tasks, workers, converters, and transforms.
- Working knowledge of the Kafka REST Proxy and experience building custom connectors using core Kafka concepts and APIs.
- Able to create topics, set up cluster redundancy, deploy monitoring tools and alerts, and apply best practices (see the topic-creation sketch after this list).
- Develop and ensure adherence to published system architectural decisions and development standards.
- Ability to perform data-related benchmarking, performance analysis and tuning.
- Understanding of data warehouse architecture and data modelling.
- Strong skills in in-memory applications, database design, and data integration.
- Ability to guide and mentor team members on using Kafka.
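The sketch below illustrates the Kafka Connect / Schema Registry item above. It is a minimal sketch, assuming a Connect worker reachable at http://localhost:8083 and a Schema Registry at http://localhost:8081; the connector class, connection URL, topic prefix, and column names are hypothetical and only show how a source connector with Avro converters might be registered over the Connect REST API.

```python
import json
import requests  # assumes the `requests` package is available

# Hypothetical JDBC source connector; hostnames, ports, and settings are illustrative only.
connector_config = {
    "name": "orders-jdbc-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://db-host:5432/orders",
        "mode": "incrementing",
        "incrementing.column.name": "order_id",
        "topic.prefix": "db.",
        "tasks.max": "3",
        # Avro converter so record schemas are managed by Schema Registry.
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter.schema.registry.url": "http://localhost:8081",
    },
}

# Register the connector with the Connect worker's REST API.
resp = requests.post(
    "http://localhost:8083/connectors",
    headers={"Content-Type": "application/json"},
    data=json.dumps(connector_config),
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```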
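The PySpark sketch below illustrates the extract-cleanse-conform-deliver item. It is a minimal sketch; the S3 bucket names, column names, and cleansing rules are hypothetical, and the paths assume a runtime (e.g. EMR or Glue) with S3 access configured.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-cleanse").getOrCreate()

# Extract: hypothetical raw landing zone.
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Cleanse and conform: deduplicate, drop null keys, normalise types.
cleansed = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Deliver: partitioned Parquet for downstream consumption (e.g. via Athena).
cleansed.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)
```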
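The Athena sketch below illustrates the AWS analytics item. It is a short boto3 sketch; the region, database, table, and results bucket are hypothetical.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Submit a query against a hypothetical curated database; results land in S3.
query = athena.start_query_execution(
    QueryString="SELECT order_date, COUNT(*) FROM curated.orders GROUP BY order_date",
    QueryExecutionContext={"Database": "curated"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(query["QueryExecutionId"])
```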
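The topic-creation sketch below illustrates the topics/redundancy item, using the confluent-kafka AdminClient. It assumes a three-broker cluster; the broker addresses, topic name, partition count, and retention are illustrative, with replication factor 3 and min.insync.replicas=2 standing in for redundancy settings rather than a prescribed standard.

```python
from confluent_kafka.admin import AdminClient, NewTopic

# Hypothetical broker addresses.
admin = AdminClient({"bootstrap.servers": "broker1:9092,broker2:9092,broker3:9092"})

topic = NewTopic(
    "orders.events",
    num_partitions=6,
    replication_factor=3,  # replicas spread across the three brokers
    config={"min.insync.replicas": "2", "retention.ms": "604800000"},
)

# create_topics returns a dict of futures keyed by topic name.
futures = admin.create_topics([topic])
for name, future in futures.items():
    try:
        future.result()  # raises if creation failed
        print(f"created topic {name}")
    except Exception as exc:
        print(f"failed to create {name}: {exc}")
```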