Mid/Senior Data Engineer

Remote from: Poland
Annual salary: Undisclosed
Employment type: Full Time
Apply before: 28 Mar 2026
Experience level: Senior

About CodiLime

CodiLime is a software and network engineering company that helps businesses build and scale their IT infrastructure.


Get to know us better

CodiLime is a software and network engineering industry expert and the first-choice service partner for top global networking hardware providers, software providers, and telecoms. We create proofs-of-concept, help our clients build new products, nurture existing ones, and provide services in production environments. Our clients include both tech startups and big players in various industries and geographic locations (US, Japan, Israel, and Europe).

We're no longer a startup: we have 250+ people on board and have been operating since 2011, yet we've kept our people-oriented culture. Our values are simple:

  • Act to deliver.

  • Disrupt to grow.

  • Team up to win.

The project and the team

The project is divided into two main parts:

  • A cloud-based platform for data visualization

  • A large-scale dataset combining information from over 10 different data sources

You will spend approximately 70% of your time on data processing activities, contributing to the continuous improvement of the large dataset. The remaining 30% will focus on maintaining the platform, working with the API, and ensuring proper integration with the latest version of the dataset.

The goal of the project is to build a centralized, large-scale business data platform for one of the biggest global consulting firms. The final dataset must be enterprise-grade, providing consultants with reliable, easily accessible information to help them quickly and effectively analyze company profiles during Mergers & Acquisitions (M&A) projects.

You will contribute to building data pipelines that ingest, clean, transform, and integrate large datasets from more than 10 different data sources, resulting in a unified database containing over 300 million company records. The data must be accurate, well-structured, and optimized for low-latency querying. The dataset will power several internal applications, enabling a robust search experience across massive datasets and making your work directly impactful across the organization.

The data will provide firm-level and site-level information, including firmographics, technographics, and hierarchical relationships (e.g., global ultimate (GU), domestic ultimate (DU), subsidiary, site). This platform will serve as a key data backbone for consultants, delivering critical metrics such as revenue, CAGR, EBITDA, number of employees, acquisitions, divestitures, competitors, industry classification, web traffic, related brands, and more.

Technology stack:

  • Languages: Python, SQL

  • Data Stack: Snowflake + dbt

  • Workflow Orchestration: Apache Airflow (extensive use of complex DAGs; a minimal DAG sketch follows this list)

  • Data Processing: Apache Spark on Azure Databricks

  • Cloud Environment:
    – Platform: AWS (EKS, S3, Lambda, ECR, EMR, OpenSearch)
    – Dataset: Azure (AKS, Blob Storage, Azure Functions, ACR, Databricks, Azure AI Search)

  • CI/CD: GitHub Actions

  • Future Direction – AI & Advanced Automation
    – Building Agentic AI systems
    – Working with frameworks such as LangChain and cloud-native AI libraries
    – Integrating Azure OpenAI services

  • API: API Gateway, FastAPI (REST, async)
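
To make the stack concrete, below is a minimal sketch of how a daily refresh could be wired together in Airflow 2.x, with dbt running the Snowflake transformations. Every name in it (DAG id, scripts, project paths) is a hypothetical illustration, not the project's actual code.

```python
# Hypothetical sketch: a daily DAG that ingests raw source data, then runs
# dbt models on Snowflake. All ids and paths are illustrative only.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-engineering",
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="company_dataset_refresh",  # hypothetical name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",                 # Airflow 2.4+ keyword
    catchup=False,
    default_args=default_args,
) as dag:
    # Land raw files from the external sources (ingestion script assumed).
    ingest = BashOperator(
        task_id="ingest_sources",
        bash_command="python /opt/pipelines/ingest_sources.py",
    )

    # Build the Snowflake models with dbt, then validate them with dbt tests.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/company_dataset",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/company_dataset",
    )

    ingest >> dbt_run >> dbt_test
```

Since the project runs Airflow on Kubernetes, the Bash tasks above would more realistically be KubernetesPodOperator tasks; the dependency wiring stays the same.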

What else you should know:

  • Team Structure:
    – Data Architecture Lead
    – Data Engineers
    – Backend Engineers
    – DataOps Engineers
    – Frontend Engineer
    – Product Owner

  • Work culture:
    – Agile, collaborative, and experienced work environment.
    – As this project will significantly impact the organization, we expect a mature, proactive, and results-driven approach.
    – You will work with a distributed team across Europe and India.

We work on multiple interesting projects at the same time, so we may invite you to interview for another project if your competencies and profile are well suited for it.

Your role

As part of the project team, you will be responsible for:

  • Data Pipeline Development
    – Designing, building, and maintaining scalable, end-to-end data pipelines for ingesting, cleaning, transforming, and integrating large structured and semi-structured datasets
    – Optimizing data collection, processing, and storage workflows
    – Conducting periodic data refresh processes (through data pipelines)
    – Building a robust ETL infrastructure using SQL technologies
    – Assisting with data migration to the new platform
    – Automating manual workflows and optimizing data delivery

  • Data Transformation & Modeling
    – Developing data transformation logic using SQL and dbt for Snowflake.
    – Designing and implementing scalable and high-performance data models.
    – Creating matching logic to deduplicate and connect entities across multiple sources (a simplified matching sketch follows this list).
    – Ensuring data quality, consistency, and performance to support downstream applications.

  • Workflow Orchestration
    – Orchestrating data workflows using Apache Airflow, running on Kubernetes.
    – Monitoring and troubleshooting data pipeline performance and operations.

  • Data Platform & Integration
    – Enabling integration of third-party and pre-cleaned data into a unified schema with rich metadata and hierarchical relationships.
    – Working with relational (Snowflake, PostgreSQL) and non-relational (Elasticsearch) databases.

  • Software Engineering & DevOps
    – Writing data processing logic in Python.
    – Applying software engineering best practices: version control (Git), CI/CD pipelines (GitHub Actions), DevOps workflows.
    – Ensuring code quality using tools like SonarQube.
    – Documenting data processes and workflows.
    – Participating in code reviews.

  • Future-Readiness & Integration
    – Preparing the platform for future integrations (e.g., REST APIs, LLM/agentic AI).
    – Leveraging Azure-native tools for secure and scalable data operations.

  • Being proactive and motivated to deliver high-quality work.

  • Communicating and collaborating effectively with other developers.

  • Maintaining project documentation in Confluence.
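
To illustrate the matching responsibility above, here is a deliberately simplified sketch: normalize a company name and website into a blocking key, then group records that share a key. It is a toy built on assumed helpers; real matching across 10+ sources and 300M+ records would involve fuzzy scoring and hierarchy awareness, and would typically be implemented in SQL/dbt on Snowflake rather than in standalone Python.

```python
# Toy record-matching sketch: normalize identifiers into a blocking key,
# then group records that share the same key. Illustrative only.
import re
from collections import defaultdict

# Legal-form suffixes stripped during normalization (not exhaustive).
LEGAL_SUFFIXES = re.compile(r"\b(inc|llc|ltd|gmbh|corp)\b\.?", re.IGNORECASE)

def normalize_name(name: str) -> str:
    """Lowercase a company name and strip legal suffixes and punctuation."""
    name = LEGAL_SUFFIXES.sub("", name.lower())
    return re.sub(r"[^a-z0-9]+", " ", name).strip()

def normalize_domain(url: str) -> str:
    """Reduce a website URL to its bare domain."""
    return re.sub(r"^(https?://)?(www\.)?", "", url.lower()).split("/")[0]

def match_records(records: list[dict]) -> list[list[dict]]:
    """Group source records that appear to describe the same company."""
    buckets = defaultdict(list)
    for rec in records:
        key = (normalize_name(rec["name"]), normalize_domain(rec["website"]))
        buckets[key].append(rec)
    return list(buckets.values())

if __name__ == "__main__":
    sources = [
        {"source": "A", "name": "Acme Inc.", "website": "https://www.acme.com"},
        {"source": "B", "name": "ACME", "website": "acme.com/about"},
        {"source": "C", "name": "Globex Corp", "website": "globex.example"},
    ]
    for group in match_records(sources):
        print([r["source"] for r in group])  # A and B merge; C stands alone
```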

Do we have a match?

As a Data Engineer, you must meet the following criteria:

  • Strong experience with Snowflake and dbt (must-have). You will spend approximately 70% of your time working with dbt, SQL, Snowflake, and Airflow.

  • Strong SQL skills, including experience with query optimization

  • Experience with orchestration tools like Apache Airflow, Azure Data Factory (ADF), or similar

  • Experience with Docker, Kubernetes, and CI/CD practices for data workflows

  • Experience working with large-scale datasets

  • Very good understanding of data pipeline design concepts and best practices

  • Experience with data lake architectures for large-scale data processing and analytics

  • Very good coding skills in Python
    – Ability to write clean, scalable, and testable code, including unit tests (see the short example after this list)
    – Understanding and applying object-oriented programming (OOP)

  • Experience with version control systems: Git

  • Good knowledge of English (minimum C1 level)
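
Because testable Python is called out explicitly, here is a one-function example of the expected level, with made-up figures chosen to echo the CAGR metric mentioned earlier:

```python
# Hypothetical example: a pure transformation function plus pytest-style
# unit tests. The function and figures are invented for illustration.
import pytest

def compute_cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two revenue figures."""
    if start_value <= 0 or years <= 0:
        raise ValueError("start_value and years must be positive")
    return (end_value / start_value) ** (1 / years) - 1

def test_compute_cagr_doubles_in_five_years():
    # Revenue doubling over 5 years is roughly 14.87% CAGR.
    assert abs(compute_cagr(100.0, 200.0, 5) - 0.1487) < 1e-3

def test_compute_cagr_rejects_non_positive_inputs():
    with pytest.raises(ValueError):
        compute_cagr(0.0, 200.0, 5)
```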

Beyond the criteria above, we would also appreciate the following nice-to-haves:

  • Experience with data processing frameworks, such as Apache Spark (ideally on Azure Databricks)

  • Experience with GitHub Actions for CI/CD workflows

  • Experience with API Gateway and FastAPI (REST, async); a minimal endpoint sketch follows this list

  • Experience with Azure AI Search or AWS OpenSearch

  • Familiarity with designing and developing ETL/ELT processes

  • Familiarity with LLMs, Azure OpenAI, or Agentic AI systems
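
For the FastAPI item, a minimal sketch of an async REST endpoint in the style this stack implies. The route, model fields, and in-memory lookup are invented stand-ins; the real service would query Snowflake/OpenSearch behind API Gateway.

```python
# Hypothetical async REST endpoint; names and data are illustrative only.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Company Dataset API (illustrative)")

class Company(BaseModel):
    company_id: int
    name: str
    revenue_usd: float | None = None

# Stand-in for the real Snowflake/OpenSearch-backed lookup.
_FAKE_DB = {1: Company(company_id=1, name="Acme", revenue_usd=1.2e9)}

@app.get("/companies/{company_id}", response_model=Company)
async def get_company(company_id: int) -> Company:
    """Return a single company record, or a 404 if it is unknown."""
    company = _FAKE_DB.get(company_id)
    if company is None:
        raise HTTPException(status_code=404, detail="company not found")
    return company
```

Run locally with uvicorn (e.g., `uvicorn main:app --reload`, assuming the file is named main.py) and query /companies/1.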

More reasons to join us

  • Flexible working hours and approach to work: fully remote, in the office, or hybrid

  • Professional growth supported by internal training sessions and a training budget

  • Solid onboarding with a hands-on approach to give you an easy start

  • A great atmosphere among professionals who are passionate about their work

  • The ability to change the project you work on

