Flink Data Engineer
140-170 PLN/hour B2B (net)
Senior · Full-time · B2B
#332428 · Added 11 days ago
Source: Devire
Tech Stack / Keywords
BigQuery · Looker · SQL · AI · GitHub · Apache · API · Cloud
Company and position
Devire Outsourcing IT is a B2B collaboration model for IT specialists, delivering work for clients running innovative and modern projects. The client is a consulting and technology company specializing in digital transformation, software engineering, cloud, and data.
Requirements
- Minimum 5 years of experience in data engineering, analytics engineering, or related fields
- Practical experience with Google BigQuery and Looker (LookML) in production environments
- Very good knowledge of SQL
- Experience in data modeling
- Knowledge of dbt or similar data transformation tools
- Good understanding of data governance, lineage, and data documentation
- Communication and collaboration skills
- Practical experience using AI assistants (e.g., Claude Code, GitHub Copilot, Cursor) to enhance productivity, quality, or decision-making in software development
- Practical experience with Apache Flink (including the DataStream API); see the job sketch after this list
- Experience in maintaining and updating Flink environments (experience with Flink 2.0 is a plus)
- Deep understanding of streaming pipeline architectures
- Knowledge of performance optimization, state management, and fault tolerance mechanisms
- Experience migrating large datasets from BigQuery to Data Cloud Storage
- Very good knowledge of data format conversion (especially Avro to Parquet)
- Ability to design, scale, and automate migration processes
- Attention to data integrity and minimizing downtime
- Good knowledge of Google Cloud Platform (GCP) and its data services
- Understanding of distributed systems
- Knowledge of schema evolution and data storage optimization
- Ability to break down complex problems into concrete, actionable steps
- Proactivity and sense of responsibility for solutions
- Ability to identify risks
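
To make the Flink expectations above concrete, here is a minimal sketch of a DataStream API job of the kind the role involves: a checkpointed pipeline that reads events, applies a transformation, and emits results. The Kafka broker address, topic, and group id are invented placeholders, not details from this posting.

```java
// Minimal Flink DataStream sketch (placeholders only, not this client's setup).
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StreamingJobSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Periodic checkpoints provide the fault tolerance the posting asks about.
        env.enableCheckpointing(60_000);

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")   // placeholder
                .setTopics("events")                 // placeholder
                .setGroupId("flink-sketch")          // placeholder
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "events-source")
           .map(String::toUpperCase)                 // stand-in transformation
           .print();

        env.execute("streaming-job-sketch");
    }
}
```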
Responsibilities
- Development and maintenance of real-time data processing pipelines using Apache Flink
- Migration of existing Flink jobs (DataStream API) and adaptation to new platform standards
- Planning and execution of Apache Flink platform upgrade to version 2.0
- Designing and optimizing efficient, scalable, and fault-tolerant streaming architectures
- Migration of large datasets from BigQuery (BQ) to Data Cloud Storage (DCS)
- Automation and scaling of data migration processes to handle increasing volumes
- Data conversion (Avro to Parquet) considering performance, schema evolution, and storage optimization; see the conversion sketch after this list
- Use of AI tools to improve migration, validation, and data transformation processes
- Ensuring high data quality and integrity and minimizing system downtime
- Monitoring and optimizing pipeline and streaming platform performance
- Collaboration with technical and business (cross-functional) teams
- Communicating technical issues clearly to non-technical stakeholders
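
A hedged sketch of the export half of the migration responsibility above, using the google-cloud-bigquery Java client. It assumes the DCS destination is addressable via a gs://-style URI, which this posting does not confirm; the project, dataset, table, and bucket names are placeholders.

```java
// BigQuery table export sketch (all identifiers are placeholders).
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.ExtractJobConfiguration;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.TableId;

public class BigQueryExportSketch {
    public static void main(String[] args) throws Exception {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        TableId table = TableId.of("my-project", "my_dataset", "my_table"); // placeholders

        // Export as Avro; the sharded URI (*) lets BigQuery split large tables.
        ExtractJobConfiguration config = ExtractJobConfiguration
                .newBuilder(table, "gs://my-bucket/export/part-*.avro")     // placeholder URI
                .setFormat("AVRO")
                .build();

        Job job = bigquery.create(JobInfo.of(config)).waitFor();
        if (job == null || job.getStatus().getError() != null) {
            throw new IllegalStateException("Export failed: "
                    + (job == null ? "job no longer exists" : job.getStatus().getError()));
        }
    }
}
```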
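
And a minimal sketch of the Avro-to-Parquet conversion mentioned above, using the parquet-avro bridge. The file paths and compression codec are illustrative choices; in practice, schema evolution and storage layout would follow the target platform's rules.

```java
// Avro-to-Parquet conversion sketch (file paths are placeholders).
import java.io.File;
import org.apache.avro.Schema;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;

public class AvroToParquetSketch {
    public static void main(String[] args) throws Exception {
        File avroFile = new File("input.avro");                         // placeholder
        try (DataFileReader<GenericRecord> reader =
                     new DataFileReader<>(avroFile, new GenericDatumReader<GenericRecord>())) {
            Schema schema = reader.getSchema();  // the schema travels with the Avro file

            try (ParquetWriter<GenericRecord> writer = AvroParquetWriter
                    .<GenericRecord>builder(new Path("output.parquet")) // placeholder
                    .withSchema(schema)
                    .withCompressionCodec(CompressionCodecName.SNAPPY)  // storage optimization
                    .build()) {
                for (GenericRecord record : reader) {
                    writer.write(record);
                }
            }
        }
    }
}
```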
Offer
- 100% remote work
- Benefits package including medical care and multisport card
- Long-term cooperation
Devire
162 active offers