17 Jan
Senior Data Engineer - Go/Python/AWS
Cincinnati, Ohio 45201, USA

Vacancy expired!

100% Remote!

This Jobot Job is hosted by: Madeline Lazarus

Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.

Salary: $140,000 - $160,000 per year

A bit about us:

Sr. Data Engineer - ETL/AWS
Location: 100% REMOTE
Salary: $140,000 - $160,000

You will work with business and technology leadership to understand the holistic business strategy, then work with product managers, domain architects, senior technologists, and business analysts to identify, define, and develop solution-specific, cloud-native IT architectures and product deliverables that support the strategy. You will also lead, and actively participate in, the design and hands-on development of incubation and innovation projects, including prototyping, proofs of concept, new enterprise technology components, and the delivery of products to market.

Sound interesting to you? Please read on!

Why join us?
  • Cutting-edge technology company in the healthcare industry
  • Vacation/PTO
  • Medical
  • Dental
  • Vision
  • Bonus
  • 401k

Job Details

What you will be doing:
  • New product development and enhancements leveraging technologies including, but not limited to, AWS EMR, Spark, Kafka, and supporting AWS services such as Lambda, SNS, SQS, Glue, and Athena.
  • Design, develop, and operate scalable, resilient data ingestion pipelines using open-source big-data technologies. Ensure industry best practices are followed for data pipelines, metadata management, data quality, data governance, and data privacy.
  • Continuously refactor the codebase to ensure maintainability, testability, and performance. Actively perform code reviews and help evolve our code review guidelines to ensure quality code is shipped.
  • Estimate and plan complex project-level assignments, even when some requirements are not fully fleshed out; contribute significantly to the scalability of a product in terms of performance, supportability, and optimized costs.
  • Assume the role of technical lead on projects by evaluating designs, supporting implementation, and collaborating with functional groups on the work to be delivered. Recognize the strengths and limitations of team members and adapt to leverage and mentor them.
  • Work closely with Product Management and other stakeholders to ensure that the features delivered meet our customers' needs.
  • Provide continuous feedback, identify process improvement opportunities, and openly communicate and collaborate to enhance team capabilities.

Must Have Skills:
  • 8+ years of professional experience in software development with modern programming languages, preferably Go or Python, or alternatively C# or Java. Strong backend programming skills for data processing, with practical knowledge of availability, scalability, clustering, microservices, multi-threaded development, and performance patterns.
  • 5+ years of experience with big-data pipelines using Spark in Java, Scala, or Python.
  • 5+ years of professional experience with indexed data persistence such as relational databases (SQL) or NoSQL data stores.
  • Proven track record of building scaled data platforms and enterprise products while working in large engineering teams.
  • Experience with public clouds such as AWS, Azure, or Google Cloud Platform.
  • Experience leading a team in modular design, implementation, and testing. Ability to break down requirements into stories and provide estimates, perform code reviews, raise technical risks, and create documentation.
  • Bachelor's degree in Computer Science or equivalent.

Nice to have:
  • Experience with Amazon Web Services (Fargate, Lambda, Kinesis, CloudWatch, DynamoDB, ElastiCache, Athena, AWS EMR, Data Pipeline, Step Functions, AWS Batch, CloudFormation, Redshift, Glue, etc.).
  • Experience building complex software systems that have been successfully delivered to customers.
  • Experience working with large datasets and large-scale distributed computing.
  • Experience building data lakes and data warehouses.
  • Experience developing ETL data pipelines and performance-tuning them.
  • Experience using orchestration tools such as Airflow, Kubeflow, or equivalent.
  • Understanding of data modeling and database theory (ACID, CAP, etc.).
  • Experience modeling real-world data in both RDBMS (Postgres, SQL Server, or equivalent) and NoSQL (MongoDB, DynamoDB, Redis, or equivalent) persistence layers.
  • Experience building automated CI/CD pipelines using tools like Git, Azure DevOps, or equivalent.

Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.


