Python Developer

About the job

We are looking for an experienced Python/PySpark Developer with expertise in Big Data technologies and the AWS Cloud to join our team in Hyderabad. The ideal candidate will have over 5 years of hands-on experience building, optimizing, and deploying big data solutions in a cloud environment.

Key Responsibilities

  • Develop and optimize big data pipelines and ETL workflows using Python, PySpark, and AWS services (a brief illustrative sketch follows this list).
  • Process, analyze, and transform large datasets using distributed computing frameworks like Apache Spark.
  • Design and implement data solutions leveraging AWS services such as S3, EMR, Glue, Redshift, and Athena.
  • Monitor and optimize the performance of data pipelines and processing jobs.
  • Collaborate with data engineers, analysts, and other stakeholders to meet business requirements.
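
To illustrate the first responsibility above, here is a minimal, purely hypothetical PySpark ETL sketch: it reads raw CSV data from S3, applies a simple aggregation, and writes partitioned Parquet back to S3. The bucket names, paths, and column names (example-raw-bucket, orders, status, amount, order_date, region) are placeholder assumptions, not part of any actual project stack.

# Hypothetical sketch: a minimal PySpark ETL job (extract from S3, transform, load to S3).
# All bucket names, paths, and columns below are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV files from an S3 prefix (placeholder bucket/path)
orders = spark.read.csv("s3://example-raw-bucket/orders/", header=True, inferSchema=True)

# Transform: keep completed orders and aggregate revenue per day and region
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("total_revenue"))
)

# Load: write partitioned Parquet so query engines can prune by date
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/daily_revenue/"
)

spark.stop()

In practice, a job of this shape would typically run on EMR or as an AWS Glue job, landing data that Athena or Redshift Spectrum can query directly.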

Skills & Qualifications

  • 5+ years of experience in Python, PySpark, and Big Data ecosystems (Hadoop, Spark).
  • Expertise in AWS cloud services (S3, EMR, Glue, Lambda, Redshift, etc.).
  • Strong understanding of distributed computing, data partitioning, and parallel processing (see the short sketch after this list).
  • Experience with SQL and NoSQL databases.
  • Proficiency in building scalable, fault-tolerant data pipelines.
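
To make the partitioning point above concrete, the sketch below repartitions a large DataFrame by its grouping key before a wide aggregation so the shuffle work is spread evenly across executors. The dataset path, column names, and the partition count of 200 are assumptions chosen for illustration only.

# Hypothetical sketch: repartition by the aggregation key to control shuffle parallelism.
# Path, columns, and partition count are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("partitioning-demo").getOrCreate()

events = spark.read.parquet("s3://example-curated-bucket/events/")  # placeholder path

# Repartition by the grouping key so rows for the same customer land in the
# same partition, reducing skew in the subsequent aggregation.
per_customer = (
    events
    .repartition(200, "customer_id")
    .groupBy("customer_id")
    .agg(F.count("*").alias("event_count"))
)

per_customer.show(10)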

Preferred

  • Experience with CI/CD pipelines, Docker, and Terraform.
  • AWS certification is a plus.
