Azure Databricks Lead (Data Migration Lead)

Detroit, Michigan – Contract
  • Post Date: Apr 14, 2026
  • Work Type: Onsite

About the Job

Position Title: Azure Databricks Lead (Data Migration Lead)

Location: Detroit, MI – Onsite

Duration: 24+ Months

Job Overview

We are looking for a highly senior, deeply hands-on Databricks Lead to drive a large-scale Oracle-to-Databricks migration covering schema migration, code conversion, and ODI job modernization. The ideal candidate has extensive experience building enterprise-grade data platforms on Databricks, has executed at least one greenfield Databricks implementation, and is exceptionally strong in PySpark, Spark SQL, framework development, and Databricks Workflows.



Key Responsibilities

  • Architect, design, and implement cloud-native data platforms using Databricks (ingestion → transformation → consumption).
  • Lead the full Oracle → Databricks migration, including schema translation, ETL/ELT logic modernization, and ODI job replacement.
  • Develop reusable PySpark frameworks, data processing patterns, and orchestration using Databricks Workflows.
  • Build scalable, secure, and cost-optimized Databricks infrastructure and data pipelines.
  • Collaborate with business and technical stakeholders to drive the data modernization strategy.
  • Establish development best practices, coding standards, CI/CD, and DevOps/DataOps patterns.
  • Provide technical mentorship and create training plans for engineering teams.
  • Contribute to building MLOps and advanced operations frameworks.


Required Qualifications

  • 14+ years in Data Engineering/Architecture, including 4+ years of hands-on Databricks experience delivering end-to-end cloud data solutions.
  • Strong experience migrating from Oracle/on-prem systems to Databricks, including SQL, PL/SQL, ETL logic, and ODI pipelines.
  • Deep hands-on expertise in:
      • PySpark, Spark SQL, Delta Lake, Unity Catalog
      • Building reusable data frameworks
      • Designing high-performance batch and streaming pipelines
  • Proven experience with greenfield Databricks implementations.
  • Strong understanding of cloud-native architectures on AWS and modern data platform concepts.
  • Solid knowledge of data warehousing, columnar databases, and performance optimization.
  • Good understanding of Agile/Scrum development processes.
  • Bonus: experience designing Data Products, Data Mesh architectures, Data Vault, or enterprise data governance models.
  • Good understanding of Oracle GoldenGate.

Required Skills

  • Databricks
  • Oracle → Databricks Migration
  • PySpark
  • Spark SQL
  • Data Engineering / ETL-ELT
  • Data Framework Development
  • AWS Cloud
  • Delta Lake & Data Lakehouse Concepts
  • Architecture & Leadership
  • Performance Optimization
  • Azure