18 Jun
Big Data/Hadoop Sr Data Engineer/Developer
Charlotte, North Carolina 28201, USA

Vacancy expired!

This position is remote for the duration of COVID.

Experience
  • 7 years of progressive, post-bachelor’s experience in Data Engineering.
  • Experience with build tools, including one or more of the following: Ant, Maven, and Gradle.
  • Strong experience with IDEs and software development environments, including one or more of the following: Eclipse, NetBeans, and IntelliJ IDEA.
  • 5 years of experience in Big Data solutions using technologies including one or more of the following: Hadoop, Hive, HBase, MapReduce, Spark, Sqoop, Oozie, and Java.
  • 5 years of experience applying agile development practices and working with distributed, component-based architectures.
  • 4 years of experience in Spark/Scala, in both batch and streaming contexts (see the sketch after this list)
  • Strong experience in performance tuning, troubleshooting application issues, and analyzing production issues
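For context, a minimal Spark/Scala sketch of the batch-versus-streaming split this role calls for: the same aggregation run once over a static file and continuously over a Kafka topic. The paths, broker address, topic name, and column names are hypothetical, and the streaming half assumes the Spark Kafka connector is on the classpath.

```scala
import org.apache.spark.sql.SparkSession

object BatchVsStreaming {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("batch-vs-streaming-sketch")
      .getOrCreate()

    // Batch: read a static CSV once, aggregate, and write the result out.
    spark.read
      .option("header", "true")
      .csv("/data/events")                              // hypothetical input path
      .groupBy("event_type")
      .count()
      .write.mode("overwrite").parquet("/data/event_counts")

    // Streaming: the same aggregation over a Kafka topic, updated continuously.
    spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // hypothetical broker
      .option("subscribe", "events")                    // hypothetical topic
      .load()
      .selectExpr("CAST(value AS STRING) AS event_type")
      .groupBy("event_type")
      .count()
      .writeStream
      .outputMode("complete")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```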

Key Responsibilities:
  • Translates complex cross-functional business requirements and functional specifications into logical program designs, code modules, stable application systems, and data solutions; partners with the Product Team to understand business needs and functional specifications
  • Contributes to the design and build of complex data solutions and ensures the architecture blueprint, standards, target-state architecture, and strategies are aligned with the requirements. Participates in all end-to-end phases of the software development product lifecycle by applying and sharing an in-depth understanding of complex industry methodologies, policies, standards, and controls
  • Develops detailed architecture plans for large-scale enterprise architecture projects and drives the plans to fruition

Data Engineering Responsibilities
  • Executes the development, maintenance, and enhancement of data ingestion solutions of varying complexity across data sources such as DBMS, file systems (structured and unstructured), APIs, and streaming, on both on-prem and cloud infrastructure; demonstrates strong acumen in data ingestion toolsets and nurtures and grows junior members in this capability
  • Builds, tests, and enhances data curation pipelines, integrating data from a wide variety of sources such as DBMS, file systems, APIs, and streaming systems, to develop KPIs and metrics with high data quality and integrity (a sketch of such a pipeline follows this list)
  • Supports the development of features/inputs for data models in an Agile manner; hosts models via REST APIs; ensures non-functional requirements such as logging, authentication, error capturing, and concurrency management are accounted for when hosting models
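As an illustration of the ingestion work described above, a minimal sketch that pulls a table from a relational source over JDBC and lands it in Hive with a basic integrity check. The connection URL, table names, and environment variables are placeholders, and a Hive-enabled Spark session plus the matching JDBC driver are assumed.

```scala
import org.apache.spark.sql.SparkSession

object JdbcToHiveIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("jdbc-to-hive-ingest-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // Extract: pull the source table over JDBC (all names are placeholders).
    val src = spark.read
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")
      .option("dbtable", "SALES.ORDERS")
      .option("user", sys.env("DB_USER"))       // credentials from environment
      .option("password", sys.env("DB_PASS"))
      .load()

    // Basic integrity check: fail fast on an empty extract.
    val rows = src.count()
    require(rows > 0, "extract returned no rows")

    // Load: land the data as a curated Hive table.
    src.write.mode("overwrite").saveAsTable("curated.orders")

    println(s"Ingested $rows rows into curated.orders")
    spark.stop()
  }
}
```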

BI Engineering Responsibilities
  • Responsible for the development, maintenance, and enhancement of BI solutions of varying complexity across data sources such as DBMS and file systems (structured and unstructured), on both on-prem and cloud infrastructure; creates level metrics and other complex metrics; uses custom groups, consolidations, drilling, and complex filters
  • Demonstrates database skills (Teradata/Oracle/DB2/Hadoop) by writing views for business requirements (see the sketch after this list); uses freeform SQL and pass-through functions; analyzes and finds errors in SQL generation; creates RSDs and dashboards
  • Responsible for building, testing, and enhancing BI solutions drawing from a wide variety of sources such as Teradata, Hive, HBase, Google BigQuery, and file systems; develops solutions with optimized data performance and data security
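A minimal sketch of the view-writing and freeform-SQL work mentioned above, using Spark SQL against a Hive table; the schema, table, and column names are hypothetical stand-ins for the business entities a report would actually use.

```scala
import org.apache.spark.sql.SparkSession

object BiViewSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("bi-view-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // A view that rolls raw orders up to the grain a dashboard reports on.
    spark.sql("""
      CREATE OR REPLACE VIEW curated.v_daily_revenue AS
      SELECT order_date,
             region,
             SUM(amount)              AS revenue,
             COUNT(DISTINCT order_id) AS order_count
      FROM   curated.orders
      GROUP  BY order_date, region
    """)

    // Freeform SQL against the view, as a report or dashboard would issue it.
    spark.sql("""
      SELECT region, SUM(revenue) AS total_revenue
      FROM   curated.v_daily_revenue
      WHERE  order_date >= date_sub(current_date(), 30)
      GROUP  BY region
      ORDER  BY total_revenue DESC
    """).show()

    spark.stop()
  }
}
```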

Technical Competencies
  • Agile Development
  • Big Data Management and Analytics
  • Database Design (Physical)

Minimum Qualifications:
  • Bachelor's Degree in Engineering/Computer Science, CIS, or a related field (or equivalent work experience in a related field)
  • 5 years of experience in Data or BI Engineering, Data Warehousing/ETL, or Software Engineering
  • 4 years of experience working on project(s) involving the implementation of solutions through full software development life cycles (SDLC)
