- The Client is seeking a skilled and experienced Python Developer with a strong background in PySpark and AWS cloud services.
- The ideal candidate will be responsible for developing, optimizing, and managing data processing pipelines and backend services in a cloud environment, ensuring robust and scalable solutions.
Key Responsibilities:
- Design, develop, and maintain data-driven applications using Python and PySpark.
- Build and optimize large-scale data pipelines for ETL and real-time streaming use cases (an illustrative sketch follows this list).
- Implement and manage cloud-based solutions on AWS.
- Collaborate with data engineers, architects, and analysts to integrate data from various sources.
- Monitor and troubleshoot application performance and data quality issues.
- Write clean, efficient, and well-documented code following best practices.
- Ensure secure and efficient data storage and retrieval across cloud environments.
- Participate in code reviews and contribute to team knowledge sharing.
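For context on the kind of work these responsibilities involve, below is a minimal, illustrative PySpark batch ETL sketch. It reads raw CSV from object storage, applies light cleaning, and writes partitioned Parquet; the S3 bucket names, paths, and column names are hypothetical and not taken from this role description.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-example").getOrCreate()

# Extract: read raw CSV events from a hypothetical S3 landing zone.
raw = spark.read.option("header", True).csv("s3a://example-landing/events/")

# Transform: deduplicate, parse timestamps, and derive a partition column.
cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_id").isNotNull())
)

# Load: write partitioned Parquet to a hypothetical data-lake bucket.
(cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3a://example-datalake/events/"))
```

A production pipeline would add schema enforcement, error handling, and orchestration (for example via AWS Glue or Step Functions), which are beyond this sketch.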
Required Skills:
- Strong proficiency in Python programming.
- Hands-on experience with PySpark for distributed data processing.
- Solid understanding of AWS services.
- Experience working with data lakes, data warehouses, and large datasets.
- Proficiency in writing complex SQL queries and performance tuning (a brief Spark SQL example follows this list).
- Familiarity with CI/CD tools and version control (e.g., Git).
- Strong analytical and problem-solving skills.
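As a rough illustration of the SQL skills listed above, the sketch below runs a windowed Spark SQL query to pick the latest order per customer. The orders table, its columns, and the sample rows are hypothetical; a tiny in-memory view stands in for a real data-lake or warehouse table.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-example").getOrCreate()

# Register a small in-memory table so the query is runnable; in practice the
# data would come from a data lake or warehouse table.
spark.createDataFrame(
    [(1, 101, "2024-03-01", 50.0),
     (1, 102, "2024-03-05", 75.0),
     (2, 103, "2024-02-10", 20.0)],
    ["customer_id", "order_id", "order_date", "order_total"],
).createOrReplaceTempView("orders")

# Latest order per customer via a window function; filtering early lets Spark
# discard rows before the more expensive ranking step.
latest_orders = spark.sql("""
    SELECT customer_id, order_id, order_total
    FROM (
        SELECT o.*,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY order_date DESC) AS rn
        FROM orders o
        WHERE order_date >= '2024-01-01'
    ) ranked
    WHERE rn = 1
""")

latest_orders.show()
latest_orders.explain()  # inspect the physical plan when tuning
```

Filtering before the window function and inspecting the plan with explain() are typical starting points for the performance tuning mentioned above.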
For more details, please apply to this opportunity.
Job Type: Full-time
Expected hours: 40 per week
Work Location: In person