25 Jan
Senior Software Engineer
Vacancy expired!
- First-class compensation package (competitive base salary, sign-on bonus, annual bonus, etc.)
- Comprehensive benefits (dental/vision/health/life)
- 401(k)/retirement plan contribution
- Maternity & paternity leave
- Generous PTO/vacation policy
- Paid holidays
- Tuition reimbursement plan
- Employee Stock Purchase plan and more!
- New product development and enhancements leveraging technologies including but not limited to AWS EMR, Spark, and Kafka, plus supporting AWS services such as Lambda, SNS, SQS, Glue, and Athena
- Design, develop, and operate scalable, resilient data ingestion pipelines using open-source big-data technologies; ensure industry best practices are followed for data pipelines, metadata management, data quality, data governance, and data privacy
- Continuously refactor the codebase to ensure maintainability, testability, and performance; actively perform code reviews and help evolve our code review guidelines to ensure quality code is shipped
- Estimate and plan complex project-level assignments, even when requirements are not fully fleshed out; contribute significantly to a product's scalability in terms of performance, supportability, and cost optimization
- Assume the role of technical lead on projects: evaluate designs, support implementation, and collaborate with functional groups on work to be delivered; recognize the strengths and limitations of team members and adapt to leverage and mentor them
- Work closely with Product Management and other stakeholders to ensure that the features delivered meet our customers' needs
- Provide continuous feedback, identify process improvement opportunities, openly communicate, and collaborate to enhance team capabilities
- 5+ years of experience with a Big Data tech stack including Hadoop, Spark, Python or Scala, Kafka, and NoSQL data stores
- 8+ years of professional experience in software development with modern programming languages such as C#, Java, Go, or Python. Strong backend programming skills for data processing, with practical knowledge of availability, scalability, clustering, microservices, multi-threaded development, and performance patterns.
- Enterprise-level experience with AWS services such as EMR, Data Pipeline, Step Functions, AWS Batch, Lambda, CloudFormation, Redshift, and Glue
- Experience working with large datasets and large-scale distributed computing
- Experience developing ETL data pipelines and performance-tuning them
- Experience modeling real-world data in both RDBMS (Postgres, SQL Server, or equivalent) and NoSQL (MongoDB, DynamoDB, Redis, or equivalent) persistence layers
- Proven track record of building scaled data platforms and enterprise products, working in large engineering teams.
- Experience with public clouds such as AWS, Azure, and Google Cloud Platform
- Experience leading a team in modular design, implementation, and testing; ability to break down requirements into stories, provide estimates, perform code reviews, raise technical risks, and create documentation
- Experience building automated CI/CD pipelines using tools like Git, Terraform, or equivalent
- Experience using orchestration tools such as Airflow, Kubeflow, or equivalent
- Understanding of data modeling and database theory (ACID, the CAP theorem, etc.)
- Experience building complex, scalable, performant, secure, and reliable cloud-native systems, preferably on AWS
- Bachelor's degree in Computer Science or equivalent
- Knowledge of professional software engineering practices and best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations.
- Committed to current best practices: continuous deployment, fault tolerance, high performance, infrastructure as code, and working in a test-driven, DevOps organization
- Healthcare experience, especially in the Payer area, is a plus
- Experience with HIPAA compliance and the security of PHI data is a plus