10 May
Senior Site Reliability / DevOps Engineer, Data Engineering and Platform, 21-09027
San Francisco, California 94101, USA

Vacancy expired!

As a Senior Site Reliability / DevOps Engineer in Data Engineering and Platform, you will:

Role:

- Partner on the design of the next implementation of the Client secure, global data and insight architecture, building new stream-processing capabilities and operationalizing the “Unified Data Acquisition and Processing (UDAP)” platform
- Proactively identify and resolve performance bottlenecks
- Work with the customer support group as needed to resolve performance issues in the field
- Explore automation opportunities and develop tools to automate day-to-day operations tasks
- Provide performance metrics and maintain dashboards that reflect the health of production systems
- Conceptualize and implement proactive monitoring where possible to catch issues early (see the sketch after this list)
- Experiment with new tools to streamline the development, testing, deployment, and running of our data pipelines
- Work with cross-functional agile teams to drive projects through the full development cycle
- Help the team improve its use of data engineering best practices
- Collaborate with other data engineering teams to improve the data engineering ecosystem and talent within Client
- Creatively solve problems when facing constraints, whether in the number of developers, the quality or quantity of data, compute power, storage capacity, or simply time
- Maintain awareness of relevant technical and product trends through self-study, training classes, and job shadowing
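As an illustration of the proactive-monitoring tooling described in this list, here is a minimal Python sketch of a health-check probe. The endpoint URL, response shape, and lag threshold are hypothetical placeholders, not part of any actual UDAP API.

```python
"""Minimal sketch of a proactive health-check probe. The endpoint URL,
response fields, and threshold below are hypothetical placeholders."""
import json
import time
import urllib.request

PIPELINE_HEALTH_URL = "http://localhost:8080/health"  # hypothetical endpoint
MAX_CONSUMER_LAG = 10_000  # hypothetical alert threshold, in messages

def check_pipeline() -> None:
    # Pull the current health document and inspect the lag field.
    with urllib.request.urlopen(PIPELINE_HEALTH_URL, timeout=5) as resp:
        status = json.load(resp)
    lag = status.get("consumer_lag", 0)
    if lag > MAX_CONSUMER_LAG:
        # A production probe would page on-call or emit a metric instead.
        print(f"ALERT: consumer lag {lag} exceeds {MAX_CONSUMER_LAG}")
    else:
        print(f"OK: consumer lag {lag}")

if __name__ == "__main__":
    while True:
        try:
            check_pipeline()
        except OSError as exc:  # covers connection and HTTP errors
            print(f"ALERT: health endpoint unreachable: {exc}")
        time.sleep(60)  # poll once a minute
```

In practice a probe like this would feed a Grafana or Splunk dashboard rather than print to stdout.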

All About the Candidate:

- At least a Bachelor's degree in Computer Science, Computer Engineering, or a technology-related field, or equivalent work experience

- Intermediate experience with data warehouse projects in a product- or service-based organization

- Foundational experience as a Site Reliability Engineer or DevOps Engineer

- Foundational experience as a software engineer or software architect

- Experience solving for scalability, performance, and stability

- Expert knowledge of Linux operating systems and environments, and of scripting (shell and Python preferred)

- Deep expertise in your field of software engineering

- Expert at troubleshooting complex system and application stacks

- Operational experience with big data stacks (Hadoop ecosystem; Spark is a plus)

- Operational experience with real-time, streaming, and data-pipeline frameworks (Kafka and NiFi are a plus; see the consumer sketch after this list)
- Operational experience troubleshooting network/server communication
- Experience with performance tuning of database schemas, databases, SQL, ETL jobs, and related scripts
- Expertise in enterprise metrics/monitoring frameworks such as Splunk, Druid, and Grafana
- Experience with cloud computing services, particularly deploying and running services in Azure
- A belief in data-driven analysis and problem solving, and a proven track record of applying these principles
- An organized approach to the planning and execution of major projects
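As a rough illustration of the real-time pipeline experience referenced in this list, here is a minimal consumer sketch using the kafka-python client; the topic name, broker address, and consumer group id are hypothetical placeholders.

```python
"""Illustrative Kafka consumer sketch (kafka-python). Topic, broker, and
group id are hypothetical placeholders."""
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "transactions",                      # hypothetical topic
    bootstrap_servers="localhost:9092",  # hypothetical broker address
    group_id="udap-demo",                # hypothetical consumer group
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    # A real pipeline would validate, enrich, and forward each record;
    # here we only surface the metadata useful for lag troubleshooting.
    print(f"partition={message.partition} offset={message.offset} "
          f"value={message.value}")
```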

Name Of Your Group: Data platform

Team’s Main Responsibility: The Data platform team is focused on enabling insights into the Client network and helping build data-driven products by curating and preparing data in a secure and reliable manner. Moving to a “Unified and Fault-Tolerant Architecture for Data Ingestion and Processing” is critical to achieving this mission.

Describe The Culture Of Your Team: An innovation-motivated, data- and insights-driven, agile, self-organizing, and self-learning data engineering culture.

Describe Your Management Style: I adjust my management style to meet the needs of the people I’m managing.

What A Typical Work Day Looks Like For This Contractor:

- Proactively identify and resolve performance bottlenecks
- Work with the customer support group as needed to resolve performance issues in the field
- Develop tools to automate day-to-day operations tasks
- Conceptualize and implement proactive monitoring where possible to catch issues early
- Experiment with new tools to streamline the development, testing, deployment, and running of our data pipelines

Level Of Competency You Are Looking For [Foundational, Intermediate, or Advanced]:

Foundational:

- Experience as a Site Reliability Engineer or DevOps Engineer
- Experience as a software engineer or software architect
- Experience solving for scalability, performance, and stability
- Expert knowledge of Linux operating systems and environments, and of scripting (shell and Python preferred)

Intermediate:

- Data warehouse projects in a product- or service-based organization
- Operational experience with big data stacks (Hadoop ecosystem; Spark is a plus)
- Operational experience troubleshooting network/server communication
- Experience with performance tuning of database schemas, databases, SQL, ETL jobs, and related scripts
- Expertise in enterprise metrics/monitoring frameworks such as Splunk, Druid, and Grafana

Advanced:

- Experience with cloud computing services, particularly deploying and running services in Azure or AWS
- Operational experience with real-time, streaming, and data-pipeline frameworks (Kafka and NiFi are a plus)

Top Required Technical Skills:

- Automation/DevOps engineering [Hadoop, Spark]
- Operational experience with real-time, streaming, and data-pipeline frameworks [NiFi or Airflow] (see the DAG sketch after this list)
- Java/Scala and Python programming
- PCF or cloud experience in general
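As a small illustration of the orchestration frameworks named in this list, here is a minimal Apache Airflow DAG sketch; the DAG id and the extract/load stubs are hypothetical placeholders, not a real Client pipeline.

```python
"""Minimal Apache Airflow DAG sketch. The DAG id and task bodies are
hypothetical placeholders."""
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    print("extract: pull raw events from the source system")

def load() -> None:
    print("load: write curated records to the warehouse")

with DAG(
    dag_id="udap_demo_pipeline",  # hypothetical DAG id
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 1, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run load only after extract succeeds
```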

Soft Skills You Would Like To See In A Candidate:

- Teamwork
- Fluent communication
- Self-organization

Describe How Success Will Be Measured During The Contract: Peer review and regular performance reviews.


