Senior Software Engineer - Big Data with Optimization
Vacancy expired!
Job Description
The PubMatic Big Data Engineering group is responsible for building the scalable, fault-tolerant, highly available big data platform that handles petabytes of data across hundreds of millions of users and powers PubMatic Analytics (refer: http://www.pubmatic.com/analytics.php) for various analytics offerings. We work with large data volumes flowing daily into PubMatic data centers across geographies, and build the platform that ingests and processes this data to deliver both real-time and batch reporting and analytics for our internal and external customers.
We are looking for a Senior Software Engineer responsible for delivering key solutions and major components of our data platform, specifically in the areas of high-volume data ingestion and real-time processing.
Responsibilities
- Design, manage, and innovate the Big Data platform, big data infrastructure, and big data workflows at PubMatic, scaling to 10 PB in size across multiple data centers, geographies, and time zones.
- Optimize data processing workloads using the Rapid Library, Scala, and Spark.
- Lead and participate in the innovation, design, and development of automated optimization of SQL and Big Data workloads running on-premises.
- Troubleshoot complex issues discovered in-house as well as in customer environments.
- Cultivate sustained innovation to deliver exceptional products to customers
- Ensure timely and top-quality product delivery
- Ensure that the end product is fully and correctly defined and documented
- Ensure implementation/continuous improvement of formal processes to support product development activities
- Drive the architecture/design decisions needed to achieve cost-effective and high performance results
- Conduct feasibility analysis, produce functional and design specifications of proposed new features.
Requirements
- 4+ years of coding experience in Java, with solid CS fundamentals including data structure and algorithm design, and experience creating architectural specifications.
- 3+ years contributing to R&D and production deployments of large backend systems, with at least 2 years supporting big data use cases.
- 2+ years of experience designing and implementing data processing pipelines with a combination of the following technologies: Hadoop, MapReduce, YARN, Spark, Hive, Kafka, Avro, SQL, and NoSQL data warehouses.
- Proven experience delivering at least one or two complex Big Data projects
- Expertise with the Hadoop ecosystem on the Linux platform
- Implementation of professional software engineering best practices for the full software development life cycle, including coding standards, code reviews, source control management, documentation, build processes, automated testing, and operations.
- A passion for developing and maintaining a high-quality code and test base, and for enabling contributions from engineers across the team.
- Demonstrated ability to achieve stretch goals in a highly innovative and fast-paced environment.
- Demonstrated ability to learn new technologies quickly and independently.
- Excellent verbal and written communication skills, especially in technical communications.
- Strong interpersonal skills and a desire to work collaboratively.
- Recent experience in working with Startups is highly preferred.
- Experience handling engineering escalations from customers is preferred
- Demonstrated ability to motivate and innovate, with excellent team-building and leadership skills
- Able to communicate clearly and effectively at all levels of the organization
- Strong operational and project management skills in a product development environment
Qualifications
- Minimum experience: 5 years
- Bachelor's or Master's degree in Engineering
Additional Information
All your information will be kept confidential according to EEO guidelines.