About this role:
We’re hiring an experienced Data Engineer to join a new team within our client’s organization. The Data Engineer will be responsible for defining the data lifecycle (including data models and data sources for analytics platforms) and for gathering and cleaning business data to provide ready-to-work inputs for Data Scientists.
- Work closely with internal customers to understand their data requirements, develop and model data structures, and design and build the ingestion processes that provide access to data from operational and enterprise source systems.
- You will be involved in the design and development of data integration and data pipelines (ETL).
- You will work with various on-prem and cloud-based data platforms, technologies and services.
- You’ll plan and deliver secure, best-practice data integration strategies and approaches.
- You’ll work closely with database teams on topics such as data requirements, cleanliness, and quality.
What you need to bring:
- Bachelor’s degree required; Computer Science, MIS, or Engineering preferred.
- Minimum 4 years of experience in a data engineering or architecture role.
- Strong experience with at least two of the following technologies: Python, Scala, SQL, Java.
- Strong experience with multiple database technologies, such as: distributed processing (Spark, Hadoop, EMR), traditional RDBMS (MS SQL Server, Oracle, MySQL, PostgreSQL), and cloud database services (DynamoDB, ElastiCache for Redis, Snowflake).
- Strong experience with traditional data warehousing / ETL tools (Informatica, Talend, Pentaho, DataStage).
- Demonstrated experience working across structured, semi-structured, and unstructured data.
- Exposure to cloud platforms such as AWS, Azure, Google Cloud Platform, or Databricks.
- Experience designing and building streaming data ingestion, analysis, and processing pipelines using Kafka, Kafka Streams, Spark Streaming, and similar cloud-native technologies.
- Experience with data processing systems such as Hadoop, Spark, Storm, and Impala.
- Experience using data virtualization technologies (Denodo).
- Experience deploying applications into production environments, e.g. code packaging, integration testing, monitoring, and release management.
- Experience with software engineering best practices such as code reviews, testing frameworks, maintainability, and readability.
- Strong understanding of traditional ETL tools, RDBMS, and end-to-end data pipelines.
- Knowledge of Data Governance and strong understanding of data lineage and data quality.
- An attitude suited to thrive in a fun, fast-paced, start-up-like environment.
- Experience working on a collaborative Agile product team.
- Self-motivated with strong problem-solving and learning skills.
- Flexibility to adapt to changes in work direction as the project develops.
- Excellent communication, listening, and influencing skills.
Why work with Brunel? We are proud to offer exciting career opportunities in over 100 offices across 42 countries. Advancing your career takes time and effort – let us match you to your ideal position.
Brunel has a reputation for working with some of the best in the business. That’s what we continually strive for. Over 45 years, we’ve created a global network of interesting clients and talented individuals working together through a vast array of services. Join us today.