Data Engineer
Company and position
The Data Engineer will work for a global leader in gaming that delivers entertaining and responsible gaming experiences for players across all channels and regulated segments, from Gaming Machines and Lotteries to Sports Betting and Digital. Leveraging a wealth of compelling content, substantial investment in innovation, player insights, operational expertise, and leading-edge technology, the company's solutions deliver unrivaled gaming experiences that engage players and drive growth. The company has a well-established local presence and relationships with governments and regulators in more than 100 countries around the world, creating value by adhering to the highest standards of service, integrity, and responsibility.
Requirements
- Strong proficiency in Python for data processing and pipeline development.
- Advanced knowledge of SQL (CTEs, window functions, query optimization).
- Experience with data modeling:
  - relational schemas (3NF).
  - dimensional modeling (star/snowflake).
- Hands-on experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server, Oracle).
- Experience with data warehousing platforms (e.g., Snowflake, BigQuery, Redshift, Synapse).
- Understanding of ETL vs ELT, batch vs near real-time processing.
- Experience building and managing data pipelines and orchestration tools (e.g., Airflow, Prefect, Dagster, Azure Data Factory).
- Experience with at least one cloud platform (AWS, Azure, or GCP).
- Knowledge of data quality, validation, and governance practices.
- Familiarity with DevOps practices:
  - Git / version control.
  - CI/CD pipelines.
  - Infrastructure as Code (e.g., Terraform, CloudFormation, ARM).
- Understanding of security best practices (e.g., PII handling, RBAC, encryption).
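As an illustration of the SQL skills listed above (CTEs and window functions), here is a minimal sketch in Python using the standard-library sqlite3 driver; the table and column names are hypothetical and not taken from the posting:

```python
import sqlite3

# In-memory database with a hypothetical bets table (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bets (player_id INTEGER, placed_on TEXT, stake REAL);
    INSERT INTO bets VALUES
        (1, '2024-01-01', 10.0),
        (1, '2024-01-02', 20.0),
        (2, '2024-01-01', 5.0);
""")

# A CTE feeding a window function: running total of stakes per player.
query = """
WITH ordered AS (
    SELECT player_id, placed_on, stake
    FROM bets
)
SELECT player_id,
       placed_on,
       SUM(stake) OVER (
           PARTITION BY player_id
           ORDER BY placed_on
       ) AS running_stake
FROM ordered
ORDER BY player_id, placed_on;
"""
rows = conn.execute(query).fetchall()
print(rows)
```

The same pattern (partitioned running aggregates over a CTE) carries over to the warehousing platforms mentioned above, such as Snowflake or BigQuery.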
Nice to have:
- Spark (PySpark/Spark SQL).
- Hadoop basics.
- Streaming (Kafka, Kinesis, Pub/Sub, Event Hubs).
- Java/Scala.
- Bash.
- dbt.
- BI (Power BI, Tableau, Looker).
- APIs/SaaS integration.
- FinOps (cost optimization).
- Data governance/GRC.
- 5 years of experience in a similar position.
- English language proficiency.
Responsibilities
- Design, build, and maintain production-grade data pipelines (ETL/ELT).
- Develop scalable solutions for processing large datasets.
- Model and structure data for analytics and reporting (fact/dimension models).
- Optimize SQL queries and data storage for performance and cost efficiency.
- Design and implement end-to-end data architectures (batch and near real-time).
- Build and manage data workflows and orchestration pipelines (e.g., scheduling, dependencies, retries).
- Ensure data quality, validation, and reliability across pipelines.
- Implement monitoring, logging, and failure recovery mechanisms.
- Work with cloud platforms to build secure and scalable data solutions.
- Collaborate with analysts, data scientists, and other engineers to support data needs.
- Apply software engineering best practices (version control, CI/CD, testing).
- (Optional) Develop and maintain streaming / real-time data pipelines.
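The retry and failure-recovery duties above can be sketched as a minimal Python helper; the function names, attempt limits, and the flaky extract step are illustrative assumptions, not part of the posting:

```python
import time

def with_retries(task, max_attempts=3, delay_seconds=0.0):
    """Run a pipeline task, retrying on failure up to max_attempts times."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise  # Retries exhausted: surface the failure to the scheduler.
            time.sleep(delay_seconds)  # Back off before the next attempt.

# Hypothetical flaky extract step: fails twice, then succeeds.
attempts = {"count": 0}

def flaky_extract():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("transient source error")
    return ["row-1", "row-2"]

rows = with_retries(flaky_extract)
print(rows)  # ['row-1', 'row-2'] after two retried failures
```

In practice the orchestration tools named in the requirements (e.g., Airflow, Prefect, Dagster) provide retry, scheduling, and dependency handling as built-in task configuration rather than hand-rolled loops.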
Offer
- 20.2k–25.2k PLN net/month (B2B)
- B2B contract with fixed working hours (100%)
- Fully remote work possible
- Medical package
PTT Consulting