Hiring For: An early-stage, data-focused products and services company helping large organisations solve complex data challenges.
Title: Azure Databricks Engineer
Key Role: Azure Databricks Engineer with hands-on experience in performance optimization techniques.
Location: Bengaluru - Hybrid
Job Type: Contract to Hire (minimum 3- to 6-month contract, with conversion based on performance)
About client:
Our client is a dynamic and innovative company focused on leveraging data to drive business insights and
growth. They are committed to providing high-quality solutions to their clients and are looking for a talented
Azure Databricks Engineer to join their team.
Overview:
Seeking an experienced Azure Databricks Engineer to join our client's Engineering team. The ideal candidate will have
a strong background in Azure Databricks, Big Data processing, and optimization of Databricks jobs. This role involves
designing, developing, and optimizing data processing pipelines and ensuring that our client's Databricks environment operates efficiently.
Key Responsibilities:
Design and Develop: Create and maintain scalable data processing pipelines using Databricks and Apache Spark.
Optimize Azure Databricks Jobs: Identify performance bottlenecks and optimize Databricks jobs for efficiency and cost-effectiveness.
Data Integration: Integrate Databricks with various data sources and sinks, including cloud storage, databases, and APIs.
Monitoring and Troubleshooting: Monitor Databricks jobs and clusters, troubleshoot issues, and implement solutions to improve reliability.
Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
Best Practices: Implement and advocate for best practices in data engineering, including code quality, version control, testing, and documentation.
Security and Compliance: Ensure data security and compliance with relevant regulations and standards.
Innovation: Stay up-to-date with the latest industry trends and technologies to continuously improve our data infrastructure.
Must Have:
Minimum of 4 to 6 years of experience in data engineering or a related field.
Proven experience with Azure Databricks, Azure Data Factory and Apache Spark.
Experience optimizing Azure Databricks jobs and clusters for performance and cost.
Technical Skills:
Proficiency in Python or Scala.
Strong SQL skills and experience with ETL processes.
Experience with cloud platforms (AWS, Azure, or GCP).
Knowledge of data warehousing solutions (e.g., Snowflake, Redshift).
Familiarity with version control systems (e.g., Git).
Experience with CI/CD pipelines and automation tools.
Soft Skills:
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
Ability to work independently and as part of a team.
Attention to detail and a commitment to quality.
Preferred Qualifications:
Experience with Delta Lake and Databricks Delta.
Knowledge of streaming data processing (e.g., Spark Streaming, Kafka).
Certification in Azure Databricks or cloud platforms.
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.