AWS BigData Architect (Databricks)

No salary information
Senior · Full-time
#332327 · Added 11 days ago
Source: nofluffjobs.com

Tech Stack / Keywords

Data pipelines · UNITY · Java · Python · Scala · Apache Airflow · Apache Kafka · Kinesis · Kafka · SQL · GitHub · Snowflake · Data warehouses · Amazon Redshift · Big Data · Storage · Apache Spark · Databricks · AWS

Company and role

SoftServe is a global digital solutions company headquartered in Austin, Texas, founded in 1993. The Big Data and Analytics Center of Excellence is a data consulting and engineering branch with hundreds of data engineers and architects building end-to-end data and analytics solutions for customers in healthcare, finance, manufacturing, retail, and energy domains. The company holds top-level partnership statuses with major cloud providers and collaborates with technology partners like AWS, GCP, Microsoft, Databricks, Snowflake, and Confluent.


Requirements

  • Architect specializing in data pipeline creation
  • Experienced with both batch and streaming data processing
  • Ready to lead a team of data engineers
  • Skilled in developing scalable data solutions on AWS with a strong focus on Databricks Platform
  • Hands-on with Databricks on AWS for building and managing data pipelines and lakehouse architectures (including Delta Lake, Unity Catalog, Workflows, and Jobs)
  • Familiar with Python, Scala, or Java (preferably Python) and SQL for data manipulation and querying
  • Knowledgeable in big data technologies such as Apache Spark or Flink for advanced data processing
  • Familiar with orchestration tools like Databricks Workflows, Apache Airflow, or Amazon Managed Workflows for Apache Airflow (MWAA) for scheduling workflows
  • Experienced with data streaming platforms like Apache Kafka, Amazon Managed Streaming for Apache Kafka (MSK), or Kinesis for real-time data processing
  • Familiar with Kafka, streaming concepts, Avro file format, SQL, GitHub, and Snowflake
  • Adept at utilizing data warehouses such as Amazon Redshift or Snowflake for storage and analytics
  • Proficient in translating customer requirements into development tasks, estimating timelines, and guiding teams toward project completion
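The streaming requirements above (Kafka, Kinesis, windowed processing) ultimately come down to aggregating an unbounded event stream over fixed time windows. As a language-agnostic illustration only, a minimal plain-Python sketch of a tumbling-window count follows; the event shape and function name are hypothetical, and this is not the Databricks, Spark, or Kafka API:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed-size tumbling windows
    and count occurrences per key within each window.

    `events` is an iterable of (epoch_seconds, key) pairs, as they might
    arrive from a Kafka or Kinesis consumer (hypothetical shape).
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # align to window boundary
        windows[window_start][key] += 1
    # Return plain dicts, ordered by window start time.
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(0, "click"), (3, "click"), (5, "view"), (12, "click")]
print(tumbling_window_counts(events, 10))
# {0: {'click': 2, 'view': 1}, 10: {'click': 1}}
```

In Spark Structured Streaming the same idea is expressed declaratively with window functions over event-time columns; the sketch above only shows the underlying concept.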

Responsibilities

  • Own source-to-target mappings, drive discovery of new data sources, and coordinate with business stakeholders on data ingestion
  • Assist the Data Engineering Team with clear implementation scope definitions
  • Participate in architecture decision-making and scale the data platform by planning new data ingestion pipelines and maintaining the data model
  • Design and implement Databricks-based data solutions within AWS environments
  • Drive requirements gathering with business and tech stakeholders and assist with data modeling for systems using both Snowflake and Kafka
  • Create and maintain documentation for the data schema and model
  • Work in a team of data-focused engineers dedicated to continuous learning, improvement, and knowledge-sharing
  • Be involved in the full project lifecycle, from initial design and proof of concepts (PoCs) to minimum viable product (MVP) development and full-scale implementation
  • Investigate new technologies, build internal prototypes, and share knowledge with the SoftServe Big Data Community
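The first responsibility above, owning source-to-target mappings, can be pictured as a rename-and-select step between a raw ingestion schema and a target warehouse schema. A minimal plain-Python sketch, with an entirely hypothetical mapping and record shape (not SoftServe's actual tooling):

```python
def apply_mapping(record, mapping):
    """Rename source fields to target fields per a source->target mapping,
    dropping any source fields the mapping does not cover."""
    return {target: record[source]
            for source, target in mapping.items()
            if source in record}

# Hypothetical mapping from a raw ingestion schema to a warehouse schema.
MAPPING = {"cust_id": "customer_id", "amt": "amount_usd", "ts": "event_time"}

raw = {"cust_id": 42, "amt": 19.99, "ts": "2024-01-01T00:00:00Z", "debug": True}
print(apply_mapping(raw, MAPPING))
# {'customer_id': 42, 'amount_usd': 19.99, 'event_time': '2024-01-01T00:00:00Z'}
```

In practice such mappings live in Spark or dbt transformations and in schema documentation rather than hand-written dicts; the sketch only shows the shape of the contract the architect owns.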

Offer

  • Sport subscription
  • Training budget
  • Private healthcare
  • International projects
  • Flat structure
  • Small teams
  • Free coffee
  • Canteen
  • Bike parking
  • Free snacks
  • Free parking
  • In-house trainings
  • Modern office
  • No dress code

Other information

SoftServe is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment regardless of race, color, religion, age, sex, nationality, disability, sexual orientation, gender identity and expression, veteran status, and other protected characteristics under applicable law.
