02 Nov
Data Architect
Charlotte, North Carolina

Vacancy expired!

Randstad Corporate Services - Data Architect

job summary:
Data Architect

We are looking for a Big Data Platform Architect to be part of the development of a next-generation enterprise big data platform that supports every line of business. The candidate must be well versed in building Big Data analytical platforms using containerization, cloud technologies, and Big Data parallel-processing technologies, as well as in the advantages, disadvantages, and trade-offs of various architecture approaches. The candidate will be responsible for developing architecture and deployment plans for Big Data implementations, and should have sound knowledge of and architecture experience with Big Data tools, Data Science tools and libraries, Machine Learning, streaming data, and Enterprise Data Warehousing. The candidate should be able to help accelerate our customer's journey to Private Cloud by moving and improving existing Hadoop installations and modernizing their data lakes with emerging and proven industry trends. Must be familiar with agile methods and related SDLC tools.

Top required IT/Technical skill-sets: hands-on architect with coding experience; Kubernetes; SME in one of Kafka, Spark, NiFi, or Ranger

location: Charlotte, North Carolina
job type: Contract
salary: $60 - 70 per hour
work hours: 8am to 5pm
education: Bachelors
Years of experience required: 5-7, 7-10 or 10+

responsibilities:
  • Design and develop Big Data architecture patterns on on-prem and Private Cloud platforms
  • Work on new product evaluation and certification, defining standards for tool fitment to the platform
  • Develop technical architecture for enabling Big Data services, using industry best practices for large-scale processing
  • Design, build, and automate Big Data solutions centered around the Kubernetes container orchestration platform (see the sketch after this list)
  • Stand up architecture review, operating model, routines, and evaluation criteria for Big Data container platform adoption by applications
  • Maintain in-depth knowledge of the organization's technologies and architectures
  • Ensure the reference architecture is optimized for larger workloads and recommend tuning techniques
  • Develop standards and methodologies for benchmarking, performance, evaluation, testing, data security, and data privacy
  • Communicate architectural decisions, plans, goals, and strategies
  • Participate in regular scrum calls to track progress, resolve issues, mitigate risks, and escalate concerns in a timely manner
  • Contribute to the development, review, and maintenance of requirements documents, technical design documents, and functional specifications
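As context for the Kubernetes item above, the following is a minimal, hypothetical PySpark sketch of running Spark on Kubernetes; the API server URL, container image, and namespace are illustrative assumptions, not details from this posting.

# Minimal sketch: pointing a Spark session at a Kubernetes cluster instead of YARN.
# All endpoints and names below are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("k8s-sketch")
    # The Kubernetes API server acts as the cluster manager.
    .master("k8s://https://k8s-apiserver.example.com:6443")
    # Image that packages Spark plus the application code for executor pods.
    .config("spark.kubernetes.container.image", "registry.example.com/spark:3.5.0")
    .config("spark.kubernetes.namespace", "bigdata")
    .config("spark.executor.instances", "4")
    .getOrCreate()
)

# Sanity check: the driver schedules this count across executor pods.
print(spark.range(1_000_000).count())
spark.stop()

In practice such jobs are usually packaged and launched with spark-submit in cluster mode; the session-level configuration above is just the shortest way to show the moving parts.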
required skill set:
  • Experience with Big Data, analytics, and data science tools, and a good understanding of the leading products in the industry, along with passion, curiosity, and technical depth
  • Thorough understanding of and working experience with Cloudera/Hortonworks Hadoop distributions
  • Solid functional understanding of Big Data technologies, streaming, and NoSQL databases
  • Experience with the Big Data ecosystem, including tools such as YARN, Impala, Hive, Flume, HBase, Sqoop, Apache Spark, Apache Storm, Crunch, Java, Oozie, Pig, Scala, Python, and Kerberos/Active Directory/LDAP
  • Experience solving streaming use cases using Spark, Kafka, and NiFi (see the sketch after this list)
  • Thorough understanding, technical/architecture insight, and working experience with Docker and Kubernetes
  • Containerization experience with the Big Data stack using OpenShift/Azure
  • Exposure to cloud computing and object storage services/platforms
  • Experience with Big Data deployment architecture, configuration management, monitoring, debugging, and security
  • Experience performing cluster sizing exercises based on capacity requirements
  • Ability to build partnerships with internal teams and vendors to resolve product gaps/issues, and to escalate to management in a timely manner
  • Good exposure to CI/CD tools, application hosting, and containerization concepts
  • Proficiency with MS Visio
  • Must be a self-starter with excellent verbal and written communication, interpersonal, team, analytical, and problem-solving skills
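As context for the streaming item above, here is a minimal, hypothetical sketch of a Spark Structured Streaming job consuming from Kafka; the broker address, topic name, and event schema are illustrative assumptions, and the job assumes the spark-sql-kafka connector package is on the classpath.

# Minimal sketch: read JSON events from a Kafka topic with Structured Streaming.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Illustrative schema for the JSON payloads on the topic.
schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
])

# Subscribe to a Kafka topic; broker and topic names are placeholders.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker.example.com:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers raw bytes; parse the value column as JSON into typed fields.
events = raw.select(
    from_json(col("value").cast("string"), schema).alias("e")
).select("e.*")

# Write to the console for illustration; a production job would enable
# checkpointing and land data in HDFS, HBase, or object storage instead.
query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()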

qualifications:
  • Experience level: Experienced
  • Minimum 3 years of experience
  • Education: Bachelors
skills:
  • Data Architect
  • Hadoop
Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.
