Title:
Data Engineer (AI Pipelines)
Job Type:
Contract
Contract Length:
12 Months
Pay Range:
$50/hr – $175/hr
Start Date:
ASAP
Location:
Remote
About the Opportunity:
Our client, a leader in AI testing, is looking for a skilled Data Engineer (AI Pipelines)
to join their team for a 12-month engagement. This project involves building scalable ETL/ELT pipelines to ingest, clean, and transform massive datasets for AI training, inference, and real-time, low-latency applications. This is a high-impact role for a self-motivated professional who can contribute from day one and deliver results quickly.
Key Responsibilities & Deliverables:
This role is focused on the successful completion of specific tasks and deliverables. Your responsibilities will include:
- Building scalable ETL/ELT pipelines to ingest, clean, and transform massive datasets for AI training and inference.
- Implementing data quality checks and automated validation to prevent "garbage in, garbage out" in AI systems.
- Managing the storage and versioning of large datasets using tools like DVC or Snowflake.
- Optimizing data retrieval patterns for low-latency RAG systems and real-time model serving.
- Collaborating with ML engineers to ensure data features are consistent across training and production.
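To give a flavor of the data-quality work described above, the sketch below shows one minimal style of automated validation gate that keeps malformed rows out of a training pipeline. All names, fields, and rules here are hypothetical illustrations, not requirements of the role or any specific codebase.

```python
# Hypothetical example of an automated data-quality check for an AI pipeline:
# split incoming records into clean rows and rejects so bad data never
# reaches training or inference ("garbage in, garbage out").

def validate_records(records, required_fields=("id", "text", "label")):
    """Return (clean, rejects); a record is rejected if any required
    field is missing or empty."""
    clean, rejects = [], []
    for rec in records:
        missing = [f for f in required_fields if not rec.get(f)]
        if missing:
            rejects.append({"record": rec, "reason": f"missing: {missing}"})
        else:
            clean.append(rec)
    return clean, rejects

rows = [
    {"id": 1, "text": "ok sample", "label": "pos"},
    {"id": 2, "text": "", "label": "neg"},  # empty text -> rejected
]
clean, rejects = validate_records(rows)
print(len(clean), len(rejects))  # 1 1
```

In production this kind of gate would typically run as a task in an orchestrator such as Airflow or as a dbt test, with rejects routed to a quarantine table for review.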
Required Qualifications:
We are looking for someone with a proven track record of successful contract engagements. The ideal candidate will have:
- 4+ years of experience in Data Engineering.
- Deep expertise in SQL, Spark, Python, and modern data stack tools (Airflow, dbt). This isn't a learning role; you need to be a subject matter expert.
- Demonstrated ability to work autonomously and manage your own time effectively to meet project goals.
- Experience with cloud data warehouses (Snowflake, BigQuery) and Git.
- Strong communication skills to provide clear and concise status updates to the project team.