Senior Data Engineer

Remote: Full Remote


finally Fintech: Finance + Technology Scaleup https://www.finally.com/
51 - 200 Employees

Job description

About finally

finally is one of America’s fastest-growing and most exciting fintech companies, focused on being the premier financial automation platform for SMBs. Our innovative product suite integrates Credit & Banking, Billing & Invoicing, Bookkeeping, and Taxes, all harmonized through cutting-edge artificial intelligence to serve small and medium-sized businesses. finally aims to declutter financial operations, providing businesses with a seamless financial journey and allowing them to focus on what truly matters: their growth.

We’re headquartered in sunny South Florida, and in 2024 alone we raised $200 million to bolster our growth, innovate, and continue serving our customers. Our company today has more than 250 people across three offices. We’re proud to serve as the official corporate card and spend management platform for iconic sports franchises like the Florida Panthers, Miami Heat, and Chicago Bulls.

The Opportunity

As a Senior Data Engineer at finally.com, you will play a crucial role in building and scaling the data infrastructure that powers our analytics, product features, and business operations. You will be responsible for designing, developing, and maintaining resilient and efficient data pipelines, ensuring our Snowflake data warehouse is optimized for performance and reliability. This is a chance to make a significant impact by laying the technical foundations for a data-driven organization in a dynamic, high-growth environment.

What You'll Do

  • Design, build, test, and maintain scalable and reliable data pipelines to ingest data from various sources into our Snowflake data warehouse using Fivetran, Python, and AWS services (e.g., Lambda, S3, Kinesis).

  • Develop and optimize data models and schemas within Snowflake to support analytical and operational needs.

  • Implement and manage ETL/ELT processes, ensuring data quality, consistency, and timeliness.

  • Work extensively with dbt to build, test, and deploy robust data transformations.

  • Monitor data pipeline performance and data quality, implementing alerting and proactively troubleshooting issues, leveraging advanced analytics and intelligent automation where applicable.

  • Collaborate with the Founding Data Lead, Analytics Engineers, and Data Analysts to understand data requirements and deliver effective data solutions.

  • Implement data security and compliance best practices within our data pipelines and data warehouse.

  • Automate data engineering workflows and processes to improve efficiency and reduce manual intervention, including the exploration and application of AI-enhanced tooling and methodologies.

  • Create and maintain comprehensive documentation for data pipelines, data models, and processes.

  • Participate in code reviews and contribute to a culture of engineering excellence.

What You'll Need

  • 5+ years of experience in data engineering, with a strong focus on building and maintaining production-grade data pipelines and data warehouses.

  • Proficiency in SQL and Python for data manipulation, pipeline development, and scripting.

  • Hands-on experience with cloud-based data warehousing solutions, preferably Snowflake.

  • Experience with ETL/ELT tools and frameworks (e.g., Fivetran, Airflow, or similar).

  • Experience with data transformation tools, particularly dbt.

  • Strong understanding of data modeling concepts (e.g., dimensional modeling, normalization).

  • Experience with AWS cloud services relevant to data engineering (S3, EC2, Lambda, IAM, etc.).

  • Familiarity with version control systems (Git) and CI/CD practices for data pipelines.

  • Excellent problem-solving skills and attention to detail.

  • Strong communication skills and ability to work effectively in a collaborative team environment.

  • Bachelor's degree in Computer Science, Engineering, or a related field.

Bonus Points

  • An interest in or experience with leveraging ML/AI technologies to enhance data quality, observability, or pipeline optimization, or to unlock new data capabilities.

  • Experience in a Fintech startup or high-growth technology company.

  • Knowledge of data governance and data security best practices.

  • Experience with real-time data streaming technologies (e.g., Kafka, Kinesis).

  • Familiarity with infrastructure-as-code tools (e.g., Terraform).

Tech Stack You'll Work With

  • Cloud: AWS (S3, Lambda, EC2, RDS, etc.)

  • Data Warehouse: Snowflake

  • Data Ingestion: Fivetran, Python

  • Data Transformation: dbt, SQL

  • Programming: Python, SQL

  • Version Control: Git

Benefits

  • Health insurance

  • Dental insurance

  • Employee stock purchase plan

  • Paid time off

  • Paid training

  • Vision insurance

Required profile

Industry:
Fintech: Finance + Technology
Spoken language(s):
English

Other Skills

  • Collaboration
  • Communication
  • Problem Solving
