Data Engineer

Contract Type: Contract
Location: Portland, Maine
Industry: Tech Team Boston
Contact Name: Mike McKeon
Contact Email: mmckeon@dewintergroup.com
Contact Phone: 339 777 4013
Date Published: 07-01-2025
Salary: $75.00 - $85.00 Hourly
Job ID: BH-37047

Requisition Title: Senior Data Engineer
Location: Hybrid; local candidates preferred, but strong remote resumes will be considered
Experience Level: Senior

TOP (3) REQUIRED SKILLSETS:
• Experience building data services and ETL pipelines in the cloud using infrastructure-as-code and large-scale data processing engines (e.g., Spark)
• Strong experience with SQL and one or more programming languages (Python preferred)
• Strong problem-solving skills and the ability to operate independently

NICE-TO-HAVE SKILLSETS:
• Experience with data modeling, including data warehouse and data lake concepts
• Experience with relational and non-relational databases (document stores, key-value stores)
• Experience with AWS technologies such as S3, Glue, Lambda, Kinesis, and IAM
• Experience with Snowflake
• Knowledge of software and data engineering best practices related to the SDLC, version control, and CI/CD


We are seeking a highly motivated and experienced Senior Data Engineer to join our team and accelerate our client's global LIMS (Laboratory Information Management System) program initiatives. The role focuses on developing changes to the data pipelines that deliver diagnostic results to data consumers in near real time.

Reporting and Collaboration
This position reports directly to the Development Manager, working on an Agile team and collaborating with cross-functional groups including engineering, product, quality, and customer-facing software teams.

Key Responsibilities
• Design and implement scalable serverless data solutions using AWS services such as Glue and Lambda.
• Collaborate with cross-functional teams to understand and meet data needs for near real-time processing.
• Develop and maintain scalable, reliable data solutions that support the client's operational and business requirements.
• Document data flows, architecture decisions, and metadata to ensure maintainability and knowledge sharing.
• Design and implement fault-tolerant systems, ensuring high availability and resilience in our data processing pipelines.
• Actively participate in testing and quality engineering (QE) processes, collaborating closely with the QE team to ensure the reliability and accuracy of data solutions.

Required Skills
• Strong problem-solving skills and the ability to operate independently, sometimes with limited information.
• Strong communication skills, both verbal and written, including the ability to communicate complex issues to both technical and non-technical users in a professional, positive, and clear manner.
• Initiative and self-motivation with strong planning and organizational skills.
• Ability to prioritize and adapt to changing business needs.
• Proven experience building and maintaining operational data pipelines, particularly with streaming technologies such as AWS Kinesis.
• Strong proficiency in Apache Spark, Python, and Scala.
• Strong background in AWS cloud services, with a focus on serverless architectures for data processing (Glue, Lambda, etc.).
• Familiarity with a broad range of technologies, including:
  o Cloud-native data processing and analytics
  o SQL and NoSQL databases (Oracle, PostgreSQL, MySQL, DynamoDB, MongoDB)
  o Scripting and programming with Python and Scala, or similar languages
• Ability to translate complex business requirements into scalable and efficient data solutions.
• Strong multitasking skills and the ability to prioritize effectively in a fast-paced environment.

Preferred Background
• A minimum of five years of experience in a similar role, preferably within a technology-driven environment.
• Experience building data services and ETL pipelines in the cloud using infrastructure-as-code and large-scale data processing engines (e.g., Spark).
• Strong experience with SQL and one or more programming languages (Python and/or Scala preferred).

Success Metrics
• Meeting delivery timelines for project milestones.
• Effective collaboration with cross-functional teams.
• Maintaining high standards of data accuracy and accessibility in a fast-paced, dynamic environment.
• Reduction in data pipeline failures or downtime through resilient, fault-tolerant design.
• Demonstrated contribution to the stability and scalability of the platform through well-architected, maintainable code.
• Positive feedback from stakeholders (engineering, product, or customer-facing teams) on delivered solutions.
• Active contribution to Agile ceremonies and improvement of team velocity or estimation accuracy.
• Proactive identification and mitigation of data-related risks, including security or compliance issues.

DeWinter Group and Maris Consulting are equal opportunity employers, and all qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. We post pay scales based on our client pay ranges. DeWinter, Maris, and our clients reserve the right to modify the requirements of the role, which can impact the posted pay ranges.
