Data Engineer (Databricks, Neo4j)

Paris 2025-11-21

Summary

Location

Paris

Category

Contract type

Publication date

2025-11-21

Description du poste

Data Engineer (Databricks, Teradata & Neo4j)
Location: Remote (Europe)
Experience: 5–7 Years
Employment Type: Full-Time
Client Location: Sweden
Position Overview
We are looking for an experienced Data Engineer with strong hands-on expertise in Databricks, Teradata, and Neo4j to join a leading technology-driven team in Sweden. This is a remote role, but candidates must currently reside in Europe due to project compliance and collaboration requirements. The ideal candidate will have a solid background in building scalable data pipelines, integrating complex data sources, and working with modern data platforms.
Key Responsibilities
Data Engineering & Development

Design, develop, and optimize scalable data pipelines using Databricks (PySpark/Spark); see the sketch after this list
Build, maintain, and enhance ETL/ELT processes across multiple data environments
Integrate structured and unstructured datasets for downstream analytics and consumption
Develop and optimize data models on Teradata for performance and reliability
Implement graph‑based data solutions using Neo4j
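
As an illustration of the pipeline work described in the first bullet, here is a minimal PySpark-to-Delta sketch; the input path, column names, and target table are hypothetical placeholders for the example, not project specifics.

```python
# A minimal sketch, assuming a Databricks-style environment with Delta Lake.
# The landing path, column names, and target table are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Read raw JSON landed by an upstream process (hypothetical path)
raw = spark.read.json("/mnt/raw/orders/")

# Basic cleansing: drop malformed rows and normalise the timestamp column
clean = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_ts", F.to_timestamp("order_ts"))
)

# Persist as a Delta table for downstream analytics and consumption
clean.write.format("delta").mode("overwrite").saveAsTable("analytics.orders_clean")
```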

Solution Design & Architecture

Collaborate with solution architects and business teams to understand data needs and design robust solutions
Participate in system design sessions and contribute to architecture improvements
Ensure data quality, validation, and governance throughout the data lifecycle

Performance & Optimization

Troubleshoot and optimize Spark jobs, Teradata SQL queries, and data workflows; see the sketch after this list
Ensure highly available and high‑performance data pipelines
Monitor data operations and automate workflows where possible
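
For the Teradata tuning mentioned in the first bullet, inspecting the optimizer's plan with EXPLAIN is the usual starting point. A hedged sketch using the open-source teradatasql DB-API driver follows; the host, credentials, and sales table are placeholders.

```python
# A hedged sketch of Teradata query-plan inspection from Python.
# Host, credentials, and the sales table are illustrative placeholders.
import teradatasql

with teradatasql.connect(host="tdhost", user="dbc", password="***") as conn:
    with conn.cursor() as cur:
        # EXPLAIN returns the optimizer's step-by-step plan as rows of text
        cur.execute(
            "EXPLAIN SELECT order_id, SUM(amount) FROM sales GROUP BY order_id"
        )
        for (step,) in cur.fetchall():
            print(step)
```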

Collaboration & Communication

Work with cross‑functional teams including BI, Data Science, and Platform Engineering
Document technical designs, pipelines, and solutions clearly and thoroughly
Communicate effectively with remote stakeholders in a multicultural environment

Required Skills & Qualifications

5–7 years of experience as a Data Engineer
Strong, hands‑on experience with Databricks (Spark, PySpark, Delta Lake)
Mandatory expertise in Neo4j (graph modelling, Cypher queries); see the sketch after this list
Solid experience with Teradata (SQL, performance tuning, data modelling)
Strong scripting and coding experience in Python
Experience working with cloud platforms (Azure, AWS, or GCP) is preferred; Azure in particular is a plus
Strong understanding of ETL/ELT concepts, data modelling, and distributed data processing
Excellent analytical, problem‑solving, and communication skills
Ability to work independently in remote, cross‑cultural teams
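
As an illustration of the mandatory Neo4j skills above, here is a small sketch using the official neo4j Python driver; the connection URI, credentials, and the Person/KNOWS graph model are assumptions for the example only.

```python
# A small sketch with the official neo4j Python driver (v5 API).
# URI, credentials, and the Person/KNOWS model are illustrative only.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def create_friendship(tx, a: str, b: str):
    # MERGE keeps the write idempotent: nodes and the relationship
    # are created only if they do not already exist
    tx.run(
        "MERGE (p1:Person {name: $a}) "
        "MERGE (p2:Person {name: $b}) "
        "MERGE (p1)-[:KNOWS]->(p2)",
        a=a, b=b,
    )

with driver.session() as session:
    session.execute_write(create_friendship, "Alice", "Bob")

driver.close()
```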

Preferred Qualifications

Experience with CI/CD pipelines for data workflows
Knowledge of data governance, data quality frameworks, and metadata management
Exposure to real‑time data processing technologies (Kafka, Event Hub, etc.) is an advantage

Additional Information

Remote role – Europe‑based candidates only due to project requirements
Opportunity to work with a global team on cutting‑edge data technologies



How to apply

For more information and to apply, please click below.