Summary
As a Hadoop Data Engineer/Software Engineer, you will be responsible for designing, developing, and supporting large-scale data solutions.
The position focuses on building data pipelines, implementing scalable architectures, and ensuring system performance in a dynamic, Agile, and DevOps-driven environment.
Company: HSBC
Position: Data Engineer
Experience: Freshers may apply
Qualification: Degree
Location: Pune
Key Responsibilities
- Design and develop scalable applications using Scala, Spark, and Hadoop ecosystem components (see the pipeline sketch after this list).
- Build and optimize data pipelines leveraging technologies such as Hive, MapReduce, and YARN.
- Implement automated testing, deployment, and monitoring processes.
- Collaborate with business analysts to translate requirements into technical solutions.
- Participate in sprint planning, code reviews, and retrospectives.
- Provide production support and troubleshooting, ensuring the availability, accuracy, and performance of applications.
- Work closely with architects and senior developers on system design, data modeling, and solution enhancements.
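To give a flavour of the Scala/Spark/Hive work these responsibilities describe, here is a minimal sketch of a batch pipeline. The table names (`raw.trades`, `curated.daily_trade_totals`) and columns are hypothetical illustrations only, not part of the role description, and the sketch assumes a Hive-enabled Spark cluster.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyTradePipeline {
  def main(args: Array[String]): Unit = {
    // Hive-enabled session; assumes the cluster exposes a Hive metastore
    val spark = SparkSession.builder()
      .appName("daily-trade-pipeline")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical source table and columns, for illustration only
    val trades = spark.table("raw.trades")
      .filter(col("trade_date") === current_date())

    // Aggregate today's trades per account
    val dailyTotals = trades
      .groupBy(col("account_id"))
      .agg(sum(col("amount")).as("total_amount"))

    // Persist results to a hypothetical curated table
    dailyTotals.write
      .mode("overwrite")
      .saveAsTable("curated.daily_trade_totals")

    spark.stop()
  }
}
```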
Required Skills
- Proficiency in Scala (2.10+) or Java (1.8+) with experience in application design and development.
- Hands-on experience with Apache Spark, Hive, Kafka, Spark Streaming, Hadoop, ETL frameworks, and SQL (a streaming sketch follows this list).
- Strong knowledge of Unix/Linux systems and debugging techniques.
- Experience with Git/GitHub for version control, Jenkins for CI/CD, Ansible for deployment automation, and JIRA for requirement management.
- Understanding of data modeling using relational and non-relational concepts.
- Familiarity with scheduling tools such as Airflow or Control-M.
- Exposure to time-series or analytics databases like Elasticsearch.
- Awareness of cloud design patterns, DevOps practices, and Agile methodologies (Scrum/Kanban).
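For the Kafka and Spark Streaming skills listed above, the following is a minimal Structured Streaming sketch in Scala. The broker address, topic name, and checkpoint path are hypothetical, and the job assumes the `spark-sql-kafka` connector is on the classpath; a real pipeline would parse and persist the payloads rather than print them.

```scala
import org.apache.spark.sql.SparkSession

object KafkaStreamSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-stream-sketch")
      .getOrCreate()

    // Hypothetical broker address and topic, for illustration only
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
      .selectExpr("CAST(value AS STRING) AS payload")

    // Write raw payloads to the console for inspection
    val query = events.writeStream
      .format("console")
      .option("checkpointLocation", "/tmp/checkpoints/events")
      .start()

    query.awaitTermination()
  }
}
```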
This position offers the chance to work on innovative data engineering projects within one of the world’s leading financial institutions, providing exposure to cutting-edge big data frameworks, cloud practices, and DevOps culture.