26 Oct
Azure Data Integration Architect
Las Vegas, Nevada 89101, USA

Vacancy expired!

Cognizant continuously seeks outstanding associates when recruiting new employees. We pride ourselves on having extensive experience working with clients in all major markets. Cognizant's delivery model is infused with a distinct culture of high customer satisfaction. We consistently deliver positive relationships, cost reductions, and business results. Are you ready to be a change-maker?

About AI & Analytics: Artificial intelligence (AI) and the data it collects and analyzes will soon sit at the core of all intelligent, human-centric businesses. By decoding customer needs, preferences, and behaviors, our clients can understand exactly what services, products, and experiences their consumers need. Within AI & Analytics, we work to design the future: a future in which trial-and-error business decisions have been replaced by informed choices and data-supported strategies. By applying AI and data science, we help leading companies prototype, refine, validate, and scale their AI and analytics products and delivery models. Cognizant's AIA practice takes insights that are buried in data and gives businesses a clear way to transform how they source, interpret, and consume their information. Our clients need flexible data structures and a streamlined data architecture that quickly turns data resources into insightful, actionable intelligence.

You must be legally authorized to work in the United States without the need for employer sponsorship, now or at any time in the future.
Position Responsibilities: Data Architect, Azure Databricks

Required Experience:
1. Data Engineering / Data Warehousing development and operations experience of at least 7 years
2. Cloud computing environment experience of at least 2 years
3. SQL development, including PL/SQL (preferably SQL Server, but other DBMSs may be fine)
4. Python / PySpark development
5. MS Azure with ADF V2, ADF Dataflow, Databricks, SQL DB / Hyperscale, SQL DW, ADLS Gen 2, and other Azure services
6. As an alternative to Azure, AWS or Google Cloud Platform is acceptable with very strong, deep Python / PySpark experience
7. In the absence of ADF / Databricks, strong ETL tool experience (e.g., Informatica, DataStage) may be fine
8. Good communication (verbal and written) and documentation skills

Responsibilities:
1. Support the EDH data pipeline workload in the Production environment
2. Monitor job execution
3. Triage and troubleshoot job failures and other issues
4. Make code changes to fix production failures and deploy code in production
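The operations duties above (monitoring job execution, triaging failures, and redeploying fixes) can be sketched in Python. This is a minimal, illustrative sketch only: the `is_transient`, `run_with_retry`, and marker names are hypothetical and not part of the posting or any Azure API; a real pipeline would use the ADF/Databricks retry and alerting facilities instead.

```python
import time

# Hypothetical markers for triage: failures whose messages contain these
# substrings are treated as transient (retryable); anything else escalates.
TRANSIENT_MARKERS = ("timeout", "throttled", "connection reset")


def is_transient(error_message: str) -> bool:
    """Classify a failure message as retryable (transient) or not."""
    msg = error_message.lower()
    return any(marker in msg for marker in TRANSIENT_MARKERS)


def run_with_retry(job, max_attempts=3, base_delay=1.0):
    """Run a job callable, retrying transient failures with backoff.

    `job` is any callable that raises RuntimeError on failure; it stands
    in for a real pipeline activity run, which this sketch does not model.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except RuntimeError as exc:
            if not is_transient(str(exc)) or attempt == max_attempts:
                raise  # non-transient error or out of retries: escalate
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff
```

In practice the triage step (responsibility 3) is the judgment call: only failures known to be transient should be retried automatically, while logic or data errors go to a code fix and redeployment (responsibility 4).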

