We are seeking a talented and motivated Data Engineer to join our dynamic team in Berlin. In this role, you will design, develop, and optimize scalable data pipelines and architectures that enable the integration and processing of large datasets from diverse sources. You will work closely with developers and data scientists to implement and optimize machine learning models, helping to select suitable ML frameworks and tools such as TensorFlow and PyTorch. You will also design and maintain robust databases and data warehouses, ensuring efficient data storage and processing. You will lead data-driven projects independently, coordinating with stakeholders to define requirements and deliver results on time, and your expertise will help us optimize our data infrastructure for performance, security, and scalability. In addition, you will implement monitoring and testing mechanisms to ensure data quality and support the selection and implementation of appropriate tools and technologies, including cloud data platforms such as AWS Redshift, Google BigQuery, and Azure Synapse Analytics.
IT Languages:
Python
Java
Scala
As a Data Engineer, you will take on the following responsibilities:
Develop, optimize, and maintain scalable data pipelines and architectures
Build and implement ETL processes for large-volume data integration
Collaborate with developers and data scientists on machine learning model optimization
Participate in the selection of appropriate ML frameworks and tools
Design and maintain databases and data warehouses
Lead data-driven projects, coordinating with stakeholders
Optimize data infrastructure for performance and security
Ensure data quality through monitoring and testing
Support the selection of cloud data platforms and technologies
Spoken Languages:
English
German
Skillset:
Data pipeline development
ETL processes
Relational databases
NoSQL databases
Cloud platforms
Big Data technologies
Apache Kafka
Apache Airflow
Soft Skills:
Analytical thinking
Problem-solving skills
Effective communication
Collaboration
Time management
Qualifications:
Bachelor’s degree in Computer Science, Data Science, Engineering, or a related field
Several years of professional experience as a Data Engineer, preferably in consulting or a digital environment
Strong understanding of data pipeline development and maintenance
In-depth experience with relational and NoSQL databases, including strong SQL skills and hands-on work with PostgreSQL and MongoDB
Familiarity with cloud data platforms such as AWS, Google Cloud, and Azure
Knowledge of Big Data technologies such as Hadoop and Spark, as well as tools such as Apache Kafka and Apache Airflow
Proficiency in programming languages such as Python, Java, or Scala
Business fluent in both German and English
Years of Experience:
5
Location:
Berlin, Berlin, Germany, EU
Job Benefits:
Competitive salary
Flexible working hours
Professional development opportunities
Health and wellness programs
Collaborative work environment
Working Conditions:
Full Time
Employment Type:
Permanent Contract
Company Culture:
Our company fosters a culture of innovation and collaboration, where every team member's contribution is valued. We embrace a supportive environment that encourages continuous learning and professional growth. We prioritize work-life balance and strive to create a workplace that is inclusive and diverse.
Opportunities For Advancement:
Career growth within the data engineering team
Opportunities to lead projects
Access to training and certifications