Senior Software Engineer – Data & Analytics

Location

  • Charlotte, NC (preferred)
  • Dallas, TX (Irving) – secondary option
  • Hybrid (3 days on-site, 2 days remote)

Duration

  • 12-month contract with potential to extend; W2 only, no C2C

Overview

We are seeking a Senior Software Engineer – Data & Analytics to design and build scalable, enterprise-grade data pipelines. This role goes beyond hands-on development: you will take full ownership of deliverables, proactively communicate risks, and drive initiatives from concept through production.

This is an ideal opportunity for engineers who thrive in fast-paced, large-scale environments and enjoy influencing architecture, improving performance, and leading through action.


Key Responsibilities

  • Design, build, and maintain high-performance, scalable data pipelines
  • Develop solutions using Python, PySpark, and SQL
  • Own deliverables end-to-end: design, development, testing, and production support
  • Translate business requirements into data-driven technical solutions
  • Improve pipeline reliability, scalability, and performance
  • Partner with product managers and cross-functional teams
  • Proactively identify risks, communicate status, and escalate issues
  • Contribute to CI/CD pipelines, code quality, and security remediation
  • Mentor team members and collaborate within Agile environments

Required Qualifications

  • 7+ years of experience in Software Engineering
  • 4+ years building big data pipelines
  • 4+ years of experience with:
    • Apache Spark (PySpark / Spark SQL)
    • Hive and Iceberg tables
    • SQL (SQL Server or another RDBMS)
  • Strong programming experience in:
    • Python
    • PySpark / Spark SQL
    • Scala
    • Bash / Shell scripting
  • Experience with CI/CD pipelines, testing, and code quality practices
  • 2+ years working in Agile (Scrum or SAFe) environments

Preferred Qualifications

  • Experience in financial services or large enterprise environments
  • Familiarity with:
    • REST APIs
    • Dremio
    • Object storage solutions
  • Experience with streaming/event-driven architectures (Kafka)
  • Workflow orchestration tools such as:
    • Airflow
    • Oozie
    • Autosys

Key Skills

  • SQL
  • Python
  • PySpark
  • Data Pipelines / Big Data Processing

What We’re Looking For

  • Strong communication skills: clear, concise, and proactive
  • Proven ability to own and drive projects to completion
  • Self-starter who identifies issues and takes action
  • Experience leading initiatives, not just executing tasks
  • Comfortable working in enterprise-scale, fast-paced environments

Why Join

This role offers the opportunity to:

  • Own impactful, large-scale data initiatives
  • Influence architecture and engineering best practices
  • Work in a highly collaborative, fast-moving environment
  • Gain visibility with senior stakeholders

Reference JOB-245928

Company Not Specified

Job type Contract
